Feb 24 05:12:33.161143 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 24 05:12:33.806419 master-0 kubenswrapper[4158]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 05:12:33.806419 master-0 kubenswrapper[4158]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 24 05:12:33.806419 master-0 kubenswrapper[4158]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 05:12:33.806419 master-0 kubenswrapper[4158]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 05:12:33.806419 master-0 kubenswrapper[4158]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 24 05:12:33.806419 master-0 kubenswrapper[4158]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 24 05:12:33.808027 master-0 kubenswrapper[4158]: I0224 05:12:33.807096 4158 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814481 4158 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814527 4158 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814533 4158 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814537 4158 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814542 4158 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814547 4158 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814552 4158 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814556 4158 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814559 4158 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814564 4158 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814569 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814574 4158 feature_gate.go:330] unrecognized feature 
gate: InsightsOnDemandDataGather Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814580 4158 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814585 4158 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814590 4158 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814596 4158 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 05:12:33.814552 master-0 kubenswrapper[4158]: W0224 05:12:33.814602 4158 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814622 4158 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814627 4158 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814632 4158 feature_gate.go:330] unrecognized feature gate: Example Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814635 4158 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814640 4158 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814644 4158 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814648 4158 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814652 4158 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 
05:12:33.814656 4158 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814661 4158 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814666 4158 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814669 4158 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814674 4158 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814678 4158 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814684 4158 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814690 4158 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814695 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814699 4158 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814704 4158 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 05:12:33.815542 master-0 kubenswrapper[4158]: W0224 05:12:33.814708 4158 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814712 4158 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814716 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814721 4158 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814725 4158 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814729 4158 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814733 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814737 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814742 4158 feature_gate.go:330] unrecognized feature gate: 
PlatformOperators Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814746 4158 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814750 4158 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814754 4158 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814758 4158 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814762 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814768 4158 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814773 4158 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814778 4158 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814783 4158 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814788 4158 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 05:12:33.816450 master-0 kubenswrapper[4158]: W0224 05:12:33.814792 4158 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814798 4158 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814802 4158 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814806 4158 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814811 4158 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814817 4158 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814822 4158 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814827 4158 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814832 4158 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814837 4158 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814841 4158 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814845 4158 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814849 4158 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814854 4158 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814858 4158 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814862 4158 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: W0224 05:12:33.814866 4158 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: I0224 05:12:33.814998 4158 flags.go:64] FLAG: --address="0.0.0.0" Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: I0224 05:12:33.815012 4158 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: I0224 05:12:33.815026 4158 flags.go:64] FLAG: 
--anonymous-auth="true" Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: I0224 05:12:33.815033 4158 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 24 05:12:33.817703 master-0 kubenswrapper[4158]: I0224 05:12:33.815041 4158 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815047 4158 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815055 4158 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815063 4158 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815068 4158 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815073 4158 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815080 4158 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815085 4158 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815090 4158 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815095 4158 flags.go:64] FLAG: --cgroup-root="" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815100 4158 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815104 4158 flags.go:64] FLAG: --client-ca-file="" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815109 4158 flags.go:64] FLAG: --cloud-config="" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815113 4158 
flags.go:64] FLAG: --cloud-provider="" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815118 4158 flags.go:64] FLAG: --cluster-dns="[]" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815125 4158 flags.go:64] FLAG: --cluster-domain="" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815129 4158 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815135 4158 flags.go:64] FLAG: --config-dir="" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815140 4158 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815145 4158 flags.go:64] FLAG: --container-log-max-files="5" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815153 4158 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815158 4158 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815163 4158 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815183 4158 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 24 05:12:33.818668 master-0 kubenswrapper[4158]: I0224 05:12:33.815189 4158 flags.go:64] FLAG: --contention-profiling="false" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815194 4158 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815199 4158 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815204 4158 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815209 4158 flags.go:64] FLAG: 
--cpu-manager-policy-options="" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815215 4158 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815220 4158 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815226 4158 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815230 4158 flags.go:64] FLAG: --enable-load-reader="false" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815235 4158 flags.go:64] FLAG: --enable-server="true" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815240 4158 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815248 4158 flags.go:64] FLAG: --event-burst="100" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815252 4158 flags.go:64] FLAG: --event-qps="50" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815257 4158 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815262 4158 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815269 4158 flags.go:64] FLAG: --eviction-hard="" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815276 4158 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815281 4158 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815287 4158 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815292 4158 flags.go:64] FLAG: --eviction-soft="" Feb 24 
05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815296 4158 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815301 4158 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815320 4158 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815326 4158 flags.go:64] FLAG: --experimental-mounter-path="" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815332 4158 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815337 4158 flags.go:64] FLAG: --fail-swap-on="true" Feb 24 05:12:33.819715 master-0 kubenswrapper[4158]: I0224 05:12:33.815341 4158 flags.go:64] FLAG: --feature-gates="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815348 4158 flags.go:64] FLAG: --file-check-frequency="20s" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815354 4158 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815359 4158 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815364 4158 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815370 4158 flags.go:64] FLAG: --healthz-port="10248" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815377 4158 flags.go:64] FLAG: --help="false" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815382 4158 flags.go:64] FLAG: --hostname-override="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815387 4158 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815393 4158 
flags.go:64] FLAG: --http-check-frequency="20s" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815398 4158 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815403 4158 flags.go:64] FLAG: --image-credential-provider-config="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815408 4158 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815412 4158 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815417 4158 flags.go:64] FLAG: --image-service-endpoint="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815422 4158 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815427 4158 flags.go:64] FLAG: --kube-api-burst="100" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815432 4158 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815437 4158 flags.go:64] FLAG: --kube-api-qps="50" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815442 4158 flags.go:64] FLAG: --kube-reserved="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815456 4158 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815460 4158 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815465 4158 flags.go:64] FLAG: --kubelet-cgroups="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815470 4158 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815475 4158 flags.go:64] FLAG: 
--lock-file="" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815481 4158 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 24 05:12:33.820949 master-0 kubenswrapper[4158]: I0224 05:12:33.815486 4158 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815491 4158 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815498 4158 flags.go:64] FLAG: --log-json-split-stream="false" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815503 4158 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815508 4158 flags.go:64] FLAG: --log-text-split-stream="false" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815513 4158 flags.go:64] FLAG: --logging-format="text" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815518 4158 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815524 4158 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815529 4158 flags.go:64] FLAG: --manifest-url="" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815534 4158 flags.go:64] FLAG: --manifest-url-header="" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815542 4158 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815548 4158 flags.go:64] FLAG: --max-open-files="1000000" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815557 4158 flags.go:64] FLAG: --max-pods="110" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815563 4158 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 24 05:12:33.822135 master-0 
kubenswrapper[4158]: I0224 05:12:33.815569 4158 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815574 4158 flags.go:64] FLAG: --memory-manager-policy="None" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815579 4158 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815585 4158 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815591 4158 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815596 4158 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815610 4158 flags.go:64] FLAG: --node-status-max-images="50" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815615 4158 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815620 4158 flags.go:64] FLAG: --oom-score-adj="-999" Feb 24 05:12:33.822135 master-0 kubenswrapper[4158]: I0224 05:12:33.815625 4158 flags.go:64] FLAG: --pod-cidr="" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815630 4158 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815638 4158 flags.go:64] FLAG: --pod-manifest-path="" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815642 4158 flags.go:64] FLAG: --pod-max-pids="-1" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815649 4158 flags.go:64] FLAG: --pods-per-core="0" Feb 24 05:12:33.823301 master-0 
kubenswrapper[4158]: I0224 05:12:33.815654 4158 flags.go:64] FLAG: --port="10250" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815659 4158 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815663 4158 flags.go:64] FLAG: --provider-id="" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815668 4158 flags.go:64] FLAG: --qos-reserved="" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815672 4158 flags.go:64] FLAG: --read-only-port="10255" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815677 4158 flags.go:64] FLAG: --register-node="true" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815682 4158 flags.go:64] FLAG: --register-schedulable="true" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815688 4158 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815698 4158 flags.go:64] FLAG: --registry-burst="10" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815703 4158 flags.go:64] FLAG: --registry-qps="5" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815707 4158 flags.go:64] FLAG: --reserved-cpus="" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815712 4158 flags.go:64] FLAG: --reserved-memory="" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815719 4158 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815724 4158 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815729 4158 flags.go:64] FLAG: --rotate-certificates="false" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815733 4158 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 24 
05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815738 4158 flags.go:64] FLAG: --runonce="false" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815743 4158 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815747 4158 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815752 4158 flags.go:64] FLAG: --seccomp-default="false" Feb 24 05:12:33.823301 master-0 kubenswrapper[4158]: I0224 05:12:33.815757 4158 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815762 4158 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815767 4158 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815772 4158 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815777 4158 flags.go:64] FLAG: --storage-driver-password="root" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815782 4158 flags.go:64] FLAG: --storage-driver-secure="false" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815788 4158 flags.go:64] FLAG: --storage-driver-table="stats" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815793 4158 flags.go:64] FLAG: --storage-driver-user="root" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815798 4158 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815803 4158 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815808 4158 flags.go:64] FLAG: --system-cgroups="" Feb 24 05:12:33.824754 master-0 
kubenswrapper[4158]: I0224 05:12:33.815815 4158 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815823 4158 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815828 4158 flags.go:64] FLAG: --tls-cert-file="" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815833 4158 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815839 4158 flags.go:64] FLAG: --tls-min-version="" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815845 4158 flags.go:64] FLAG: --tls-private-key-file="" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815850 4158 flags.go:64] FLAG: --topology-manager-policy="none" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815855 4158 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815859 4158 flags.go:64] FLAG: --topology-manager-scope="container" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815864 4158 flags.go:64] FLAG: --v="2" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815872 4158 flags.go:64] FLAG: --version="false" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815880 4158 flags.go:64] FLAG: --vmodule="" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815888 4158 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: I0224 05:12:33.815893 4158 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 24 05:12:33.824754 master-0 kubenswrapper[4158]: W0224 05:12:33.816028 4158 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816035 4158 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816041 4158 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816045 4158 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816050 4158 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816054 4158 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816059 4158 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816064 4158 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816068 4158 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816072 4158 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816077 4158 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816081 4158 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816085 4158 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816090 4158 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816094 4158 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816099 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816104 4158 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816108 4158 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816115 4158 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 05:12:33.826285 master-0 kubenswrapper[4158]: W0224 05:12:33.816119 4158 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816124 4158 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816129 4158 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816133 4158 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816138 4158 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816143 4158 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816147 4158 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816151 4158 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816156 4158 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816160 4158 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816168 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816172 4158 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816177 4158 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816182 4158 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816187 4158 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816192 4158 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816198 4158 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816202 4158 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816207 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816212 4158 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 05:12:33.827974 master-0 kubenswrapper[4158]: W0224 05:12:33.816217 4158 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816221 4158 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816225 4158 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816230 4158 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816234 4158 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816239 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816243 4158 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816248 4158 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816253 4158 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816257 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816262 4158 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816268 4158 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816273 4158 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816277 4158 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816282 4158 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816286 4158 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816291 4158 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816296 4158 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816300 4158 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816305 4158 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 05:12:33.829219 master-0 kubenswrapper[4158]: W0224 05:12:33.816323 4158 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816328 4158 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816335 4158 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816341 4158 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816346 4158 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816351 4158 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816356 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816361 4158 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816366 4158 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816371 4158 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816375 4158 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816380 4158 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: W0224 05:12:33.816385 4158 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 05:12:33.830437 master-0 kubenswrapper[4158]: I0224 05:12:33.816393 4158 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 05:12:33.833663 master-0 kubenswrapper[4158]: I0224 05:12:33.833550 4158 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 24 05:12:33.833663 master-0 kubenswrapper[4158]: I0224 05:12:33.833617 4158 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833760 4158 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833773 4158 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833780 4158 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833788 4158 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833794 4158 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833801 4158 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833808 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833814 4158 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833820 4158 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833826 4158 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833832 4158 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833837 4158 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833843 4158 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833848 4158 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833855 4158 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833861 4158 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833866 4158 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833873 4158 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833878 4158 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 05:12:33.834209 master-0 kubenswrapper[4158]: W0224 05:12:33.833884 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833889 4158 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833896 4158 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833902 4158 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833908 4158 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833916 4158 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833927 4158 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833933 4158 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833939 4158 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833945 4158 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833951 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833956 4158 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833964 4158 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833970 4158 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833977 4158 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833984 4158 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833989 4158 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.833995 4158 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.834001 4158 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.834006 4158 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 05:12:33.835558 master-0 kubenswrapper[4158]: W0224 05:12:33.834011 4158 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834017 4158 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834022 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834027 4158 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834033 4158 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834039 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834045 4158 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834051 4158 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834056 4158 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834061 4158 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834067 4158 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834072 4158 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834078 4158 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834083 4158 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834089 4158 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834096 4158 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834103 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834109 4158 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834115 4158 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834121 4158 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 05:12:33.836598 master-0 kubenswrapper[4158]: W0224 05:12:33.834127 4158 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834132 4158 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834137 4158 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834143 4158 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834148 4158 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834154 4158 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834162 4158 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834168 4158 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834173 4158 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834180 4158 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834187 4158 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834197 4158 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834204 4158 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: I0224 05:12:33.834215 4158 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834427 4158 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:12:33.837888 master-0 kubenswrapper[4158]: W0224 05:12:33.834443 4158 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834449 4158 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834455 4158 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834460 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834467 4158 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834474 4158 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834482 4158 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834488 4158 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834494 4158 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834500 4158 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834506 4158 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834511 4158 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834517 4158 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834524 4158 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834530 4158 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834535 4158 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834541 4158 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834546 4158 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834551 4158 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 05:12:33.838908 master-0 kubenswrapper[4158]: W0224 05:12:33.834556 4158 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834561 4158 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834567 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834572 4158 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834578 4158 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834583 4158 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834588 4158 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834593 4158 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834599 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834604 4158 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834610 4158 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834616 4158 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834622 4158 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834630 4158 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834636 4158 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834641 4158 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834647 4158 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834653 4158 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834659 4158 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834664 4158 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 05:12:33.839958 master-0 kubenswrapper[4158]: W0224 05:12:33.834670 4158 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834675 4158 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834680 4158 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834686 4158 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834691 4158 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834696 4158 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834701 4158 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834706 4158 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834711 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834717 4158 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834721 4158 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834727 4158 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834733 4158 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834738 4158 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834743 4158 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834750 4158 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834759 4158 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834765 4158 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834770 4158 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834775 4158 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 05:12:33.841019 master-0 kubenswrapper[4158]: W0224 05:12:33.834781 4158 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834786 4158 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834791 4158 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834796 4158 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834801 4158 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834807 4158 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834812 4158 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834817 4158 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834822 4158 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834830 4158 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834835 4158 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: W0224 05:12:33.834841 4158 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: I0224 05:12:33.834850 4158 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: I0224 05:12:33.835124 4158 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: I0224 05:12:33.838584 4158 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 24 05:12:33.842096 master-0 kubenswrapper[4158]: I0224 05:12:33.840602 4158 server.go:997] "Starting client certificate rotation"
Feb 24 05:12:33.842901 master-0 kubenswrapper[4158]: I0224 05:12:33.840624 4158 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 24 05:12:33.842901 master-0 kubenswrapper[4158]: I0224 05:12:33.840841 4158 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 05:12:33.866768 master-0 kubenswrapper[4158]: I0224 05:12:33.866648 4158 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 05:12:33.869130 master-0 kubenswrapper[4158]: E0224 05:12:33.869059 4158 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:33.870378 master-0 kubenswrapper[4158]: I0224 05:12:33.870329 4158 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 05:12:33.896107 master-0 kubenswrapper[4158]: I0224 05:12:33.896026 4158 log.go:25] "Validated CRI v1 runtime API"
Feb 24 05:12:33.902972 master-0 kubenswrapper[4158]: I0224 05:12:33.902920 4158 log.go:25] "Validated CRI v1 image API"
Feb 24 05:12:33.905157 master-0 kubenswrapper[4158]: I0224 05:12:33.905121 4158 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 24 05:12:33.913878 master-0 kubenswrapper[4158]: I0224 05:12:33.913806 4158 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 c6a7f20e-7412-4bcb-a694-c65c3535af20:/dev/vda3]
Feb 24 05:12:33.913935 master-0 kubenswrapper[4158]: I0224 05:12:33.913871 4158 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4
fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 24 05:12:33.938278 master-0 kubenswrapper[4158]: I0224 05:12:33.937934 4158 manager.go:217] Machine: {Timestamp:2026-02-24 05:12:33.936051193 +0000 UTC m=+0.600047896 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514145280 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:8094cc4b75b94a6193669cda4f2ebd55 SystemUUID:8094cc4b-75b9-4a61-9366-9cda4f2ebd55 BootID:a3e360dd-b72b-40f0-a056-0eff64b26b55 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:63:ba:dc Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:5d:e3:99 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:2e:18:f0:62:3d:21 Speed:0 Mtu:1500}] Topology:[{Id:0 
Memory:50514145280 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} 
{Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 24 05:12:33.938278 master-0 
kubenswrapper[4158]: I0224 05:12:33.938207 4158 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 24 05:12:33.938540 master-0 kubenswrapper[4158]: I0224 05:12:33.938408 4158 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 24 05:12:33.938895 master-0 kubenswrapper[4158]: I0224 05:12:33.938857 4158 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 24 05:12:33.939100 master-0 kubenswrapper[4158]: I0224 05:12:33.939049 4158 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 24 05:12:33.939353 master-0 kubenswrapper[4158]: I0224 05:12:33.939087 4158 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan"
,"Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 24 05:12:33.939413 master-0 kubenswrapper[4158]: I0224 05:12:33.939378 4158 topology_manager.go:138] "Creating topology manager with none policy" Feb 24 05:12:33.939413 master-0 kubenswrapper[4158]: I0224 05:12:33.939390 4158 container_manager_linux.go:303] "Creating device plugin manager" Feb 24 05:12:33.939885 master-0 kubenswrapper[4158]: I0224 05:12:33.939854 4158 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 05:12:33.939942 master-0 kubenswrapper[4158]: I0224 05:12:33.939886 4158 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 05:12:33.940753 master-0 kubenswrapper[4158]: I0224 05:12:33.940721 4158 state_mem.go:36] "Initialized new in-memory state store" Feb 24 05:12:33.940854 master-0 kubenswrapper[4158]: I0224 05:12:33.940824 4158 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 24 05:12:33.945843 master-0 kubenswrapper[4158]: I0224 05:12:33.945811 4158 kubelet.go:418] "Attempting to sync node with API server" Feb 24 05:12:33.945843 master-0 kubenswrapper[4158]: I0224 05:12:33.945833 4158 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 24 05:12:33.945920 master-0 kubenswrapper[4158]: I0224 05:12:33.945856 4158 file.go:69] "Watching path" 
path="/etc/kubernetes/manifests" Feb 24 05:12:33.945920 master-0 kubenswrapper[4158]: I0224 05:12:33.945872 4158 kubelet.go:324] "Adding apiserver pod source" Feb 24 05:12:33.946556 master-0 kubenswrapper[4158]: I0224 05:12:33.946500 4158 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 24 05:12:33.951776 master-0 kubenswrapper[4158]: I0224 05:12:33.951718 4158 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 24 05:12:33.956471 master-0 kubenswrapper[4158]: I0224 05:12:33.956192 4158 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 05:12:33.957054 master-0 kubenswrapper[4158]: I0224 05:12:33.957027 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 05:12:33.957110 master-0 kubenswrapper[4158]: I0224 05:12:33.957065 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 05:12:33.957110 master-0 kubenswrapper[4158]: I0224 05:12:33.957096 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 05:12:33.957110 master-0 kubenswrapper[4158]: I0224 05:12:33.957106 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 05:12:33.957298 master-0 kubenswrapper[4158]: I0224 05:12:33.957118 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 05:12:33.957298 master-0 kubenswrapper[4158]: I0224 05:12:33.957130 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 05:12:33.957298 master-0 kubenswrapper[4158]: I0224 05:12:33.957148 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 05:12:33.957298 master-0 kubenswrapper[4158]: I0224 05:12:33.957158 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 24 05:12:33.957298 master-0 
kubenswrapper[4158]: I0224 05:12:33.957171 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 05:12:33.957298 master-0 kubenswrapper[4158]: I0224 05:12:33.957182 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 05:12:33.957298 master-0 kubenswrapper[4158]: I0224 05:12:33.957198 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 05:12:33.960929 master-0 kubenswrapper[4158]: I0224 05:12:33.960859 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 05:12:33.960991 master-0 kubenswrapper[4158]: I0224 05:12:33.960981 4158 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 24 05:12:33.961115 master-0 kubenswrapper[4158]: W0224 05:12:33.961000 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 05:12:33.961175 master-0 kubenswrapper[4158]: E0224 05:12:33.961143 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 05:12:33.961223 master-0 kubenswrapper[4158]: W0224 05:12:33.961133 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 05:12:33.961341 master-0 kubenswrapper[4158]: E0224 05:12:33.961264 4158 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 05:12:33.962345 master-0 kubenswrapper[4158]: I0224 05:12:33.962278 4158 server.go:1280] "Started kubelet" Feb 24 05:12:33.962529 master-0 kubenswrapper[4158]: I0224 05:12:33.962435 4158 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 05:12:33.962653 master-0 kubenswrapper[4158]: I0224 05:12:33.962539 4158 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 05:12:33.962723 master-0 kubenswrapper[4158]: I0224 05:12:33.962701 4158 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 24 05:12:33.963425 master-0 kubenswrapper[4158]: I0224 05:12:33.963353 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 05:12:33.963512 master-0 kubenswrapper[4158]: I0224 05:12:33.963491 4158 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 05:12:33.964782 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 24 05:12:33.966259 master-0 kubenswrapper[4158]: I0224 05:12:33.966222 4158 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 24 05:12:33.966259 master-0 kubenswrapper[4158]: I0224 05:12:33.966260 4158 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 05:12:33.967098 master-0 kubenswrapper[4158]: E0224 05:12:33.967038 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 24 05:12:33.967098 master-0 kubenswrapper[4158]: I0224 05:12:33.967074 4158 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 24 05:12:33.967098 master-0 kubenswrapper[4158]: I0224 05:12:33.967097 4158 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 24 05:12:33.967238 master-0 kubenswrapper[4158]: I0224 05:12:33.967171 4158 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 24 05:12:33.968696 master-0 kubenswrapper[4158]: E0224 05:12:33.967260 4158 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189716b713e58c16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:33.96221647 +0000 UTC m=+0.626213213,LastTimestamp:2026-02-24 05:12:33.96221647 +0000 UTC m=+0.626213213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:33.968895 master-0 kubenswrapper[4158]: I0224 05:12:33.968720 4158 reconstruct.go:97] "Volume reconstruction finished" Feb 24 
05:12:33.968895 master-0 kubenswrapper[4158]: I0224 05:12:33.968747 4158 reconciler.go:26] "Reconciler: start to sync state" Feb 24 05:12:33.969130 master-0 kubenswrapper[4158]: I0224 05:12:33.969101 4158 server.go:449] "Adding debug handlers to kubelet server" Feb 24 05:12:33.969708 master-0 kubenswrapper[4158]: W0224 05:12:33.969596 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 05:12:33.969834 master-0 kubenswrapper[4158]: E0224 05:12:33.969772 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 05:12:33.969913 master-0 kubenswrapper[4158]: E0224 05:12:33.969849 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 24 05:12:33.970634 master-0 kubenswrapper[4158]: I0224 05:12:33.970612 4158 factory.go:153] Registering CRI-O factory Feb 24 05:12:33.970733 master-0 kubenswrapper[4158]: I0224 05:12:33.970642 4158 factory.go:221] Registration of the crio container factory successfully Feb 24 05:12:33.970733 master-0 kubenswrapper[4158]: I0224 05:12:33.970702 4158 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 05:12:33.970733 master-0 
kubenswrapper[4158]: I0224 05:12:33.970714 4158 factory.go:55] Registering systemd factory Feb 24 05:12:33.970733 master-0 kubenswrapper[4158]: I0224 05:12:33.970724 4158 factory.go:221] Registration of the systemd container factory successfully Feb 24 05:12:33.970872 master-0 kubenswrapper[4158]: I0224 05:12:33.970754 4158 factory.go:103] Registering Raw factory Feb 24 05:12:33.970872 master-0 kubenswrapper[4158]: I0224 05:12:33.970779 4158 manager.go:1196] Started watching for new ooms in manager Feb 24 05:12:33.971367 master-0 kubenswrapper[4158]: I0224 05:12:33.971344 4158 manager.go:319] Starting recovery of all containers Feb 24 05:12:33.974877 master-0 kubenswrapper[4158]: E0224 05:12:33.974829 4158 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 24 05:12:33.997422 master-0 kubenswrapper[4158]: I0224 05:12:33.996812 4158 manager.go:324] Recovery completed Feb 24 05:12:34.011008 master-0 kubenswrapper[4158]: I0224 05:12:34.010959 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.012597 master-0 kubenswrapper[4158]: I0224 05:12:34.012548 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.012646 master-0 kubenswrapper[4158]: I0224 05:12:34.012629 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.012676 master-0 kubenswrapper[4158]: I0224 05:12:34.012652 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.014153 master-0 kubenswrapper[4158]: I0224 05:12:34.014113 4158 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 05:12:34.014153 master-0 kubenswrapper[4158]: I0224 05:12:34.014139 4158 cpu_manager.go:226] 
"Reconciling" reconcilePeriod="10s" Feb 24 05:12:34.014245 master-0 kubenswrapper[4158]: I0224 05:12:34.014173 4158 state_mem.go:36] "Initialized new in-memory state store" Feb 24 05:12:34.019498 master-0 kubenswrapper[4158]: I0224 05:12:34.019459 4158 policy_none.go:49] "None policy: Start" Feb 24 05:12:34.020654 master-0 kubenswrapper[4158]: I0224 05:12:34.020623 4158 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 24 05:12:34.020749 master-0 kubenswrapper[4158]: I0224 05:12:34.020727 4158 state_mem.go:35] "Initializing new in-memory state store" Feb 24 05:12:34.067605 master-0 kubenswrapper[4158]: E0224 05:12:34.067423 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 24 05:12:34.087448 master-0 kubenswrapper[4158]: I0224 05:12:34.087238 4158 manager.go:334] "Starting Device Plugin manager" Feb 24 05:12:34.087448 master-0 kubenswrapper[4158]: I0224 05:12:34.087399 4158 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 05:12:34.087448 master-0 kubenswrapper[4158]: I0224 05:12:34.087419 4158 server.go:79] "Starting device plugin registration server" Feb 24 05:12:34.087983 master-0 kubenswrapper[4158]: I0224 05:12:34.087916 4158 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 05:12:34.087983 master-0 kubenswrapper[4158]: I0224 05:12:34.087935 4158 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 05:12:34.088697 master-0 kubenswrapper[4158]: I0224 05:12:34.088634 4158 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 05:12:34.088936 master-0 kubenswrapper[4158]: I0224 05:12:34.088879 4158 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 05:12:34.088936 master-0 kubenswrapper[4158]: I0224 05:12:34.088928 4158 plugin_manager.go:118] 
"Starting Kubelet Plugin Manager" Feb 24 05:12:34.091202 master-0 kubenswrapper[4158]: E0224 05:12:34.091158 4158 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 24 05:12:34.140416 master-0 kubenswrapper[4158]: I0224 05:12:34.140279 4158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 24 05:12:34.153576 master-0 kubenswrapper[4158]: I0224 05:12:34.143143 4158 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 24 05:12:34.153576 master-0 kubenswrapper[4158]: I0224 05:12:34.143225 4158 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 24 05:12:34.153576 master-0 kubenswrapper[4158]: I0224 05:12:34.143263 4158 kubelet.go:2335] "Starting kubelet main sync loop" Feb 24 05:12:34.153576 master-0 kubenswrapper[4158]: E0224 05:12:34.143368 4158 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 24 05:12:34.153576 master-0 kubenswrapper[4158]: W0224 05:12:34.144552 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 24 05:12:34.153576 master-0 kubenswrapper[4158]: E0224 05:12:34.144693 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 24 05:12:34.171130 master-0 kubenswrapper[4158]: E0224 05:12:34.171057 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 24 05:12:34.188119 master-0 kubenswrapper[4158]: I0224 05:12:34.188058 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.189982 master-0 kubenswrapper[4158]: I0224 05:12:34.189937 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.190047 master-0 kubenswrapper[4158]: I0224 05:12:34.190002 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.190047 master-0 kubenswrapper[4158]: I0224 05:12:34.190026 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.190111 master-0 kubenswrapper[4158]: I0224 05:12:34.190095 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 05:12:34.191533 master-0 kubenswrapper[4158]: E0224 05:12:34.191471 4158 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 24 05:12:34.243529 master-0 kubenswrapper[4158]: I0224 05:12:34.243450 4158 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 24 05:12:34.243677 master-0 kubenswrapper[4158]: I0224 05:12:34.243595 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.245611 
master-0 kubenswrapper[4158]: I0224 05:12:34.245520 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.245670 master-0 kubenswrapper[4158]: I0224 05:12:34.245624 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.245670 master-0 kubenswrapper[4158]: I0224 05:12:34.245638 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.245879 master-0 kubenswrapper[4158]: I0224 05:12:34.245850 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.246267 master-0 kubenswrapper[4158]: I0224 05:12:34.246206 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:12:34.246360 master-0 kubenswrapper[4158]: I0224 05:12:34.246303 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.246975 master-0 kubenswrapper[4158]: I0224 05:12:34.246942 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.247009 master-0 kubenswrapper[4158]: I0224 05:12:34.246979 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.247009 master-0 kubenswrapper[4158]: I0224 05:12:34.247000 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.247266 master-0 kubenswrapper[4158]: I0224 05:12:34.247091 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.247464 master-0 kubenswrapper[4158]: I0224 05:12:34.247411 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.247512 master-0 kubenswrapper[4158]: I0224 05:12:34.247499 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.247759 master-0 kubenswrapper[4158]: I0224 05:12:34.247726 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.247759 master-0 kubenswrapper[4158]: I0224 05:12:34.247757 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.247817 master-0 kubenswrapper[4158]: I0224 05:12:34.247778 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.248165 master-0 kubenswrapper[4158]: I0224 05:12:34.248128 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.248277 master-0 kubenswrapper[4158]: I0224 05:12:34.248251 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.248346 master-0 kubenswrapper[4158]: I0224 05:12:34.248329 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.248782 master-0 kubenswrapper[4158]: I0224 05:12:34.248737 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.248909 master-0 kubenswrapper[4158]: I0224 05:12:34.248854 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.248957 master-0 kubenswrapper[4158]: I0224 05:12:34.248935 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.248990 master-0 
kubenswrapper[4158]: I0224 05:12:34.248966 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.249097 master-0 kubenswrapper[4158]: I0224 05:12:34.249067 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.249134 master-0 kubenswrapper[4158]: I0224 05:12:34.249116 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.250611 master-0 kubenswrapper[4158]: I0224 05:12:34.250498 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.250611 master-0 kubenswrapper[4158]: I0224 05:12:34.250573 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.250611 master-0 kubenswrapper[4158]: I0224 05:12:34.250596 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.251129 master-0 kubenswrapper[4158]: I0224 05:12:34.250627 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.251129 master-0 kubenswrapper[4158]: I0224 05:12:34.250703 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.251129 master-0 kubenswrapper[4158]: I0224 05:12:34.250723 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.251129 master-0 kubenswrapper[4158]: I0224 05:12:34.250978 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.252095 master-0 kubenswrapper[4158]: I0224 05:12:34.251829 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:12:34.252095 master-0 kubenswrapper[4158]: I0224 05:12:34.251885 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.252553 master-0 kubenswrapper[4158]: I0224 05:12:34.252460 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.252630 master-0 kubenswrapper[4158]: I0224 05:12:34.252573 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.252630 master-0 kubenswrapper[4158]: I0224 05:12:34.252600 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.253179 master-0 kubenswrapper[4158]: I0224 05:12:34.253124 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:12:34.253260 master-0 kubenswrapper[4158]: I0224 05:12:34.253233 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.253471 master-0 kubenswrapper[4158]: I0224 05:12:34.253418 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.253560 master-0 kubenswrapper[4158]: I0224 05:12:34.253473 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.253560 master-0 kubenswrapper[4158]: I0224 05:12:34.253550 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.254492 master-0 kubenswrapper[4158]: I0224 05:12:34.254431 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.254492 
master-0 kubenswrapper[4158]: I0224 05:12:34.254492 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.254670 master-0 kubenswrapper[4158]: I0224 05:12:34.254518 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.270549 master-0 kubenswrapper[4158]: I0224 05:12:34.270472 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.270549 master-0 kubenswrapper[4158]: I0224 05:12:34.270531 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:12:34.270780 master-0 kubenswrapper[4158]: I0224 05:12:34.270570 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:12:34.270780 master-0 kubenswrapper[4158]: I0224 05:12:34.270606 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.270780 master-0 kubenswrapper[4158]: I0224 05:12:34.270640 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:12:34.270780 master-0 kubenswrapper[4158]: I0224 05:12:34.270705 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.270780 master-0 kubenswrapper[4158]: I0224 05:12:34.270771 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.271059 master-0 kubenswrapper[4158]: I0224 05:12:34.270841 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.271059 master-0 kubenswrapper[4158]: I0224 05:12:34.270884 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.271059 master-0 kubenswrapper[4158]: I0224 05:12:34.270930 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:12:34.271059 master-0 kubenswrapper[4158]: I0224 05:12:34.270978 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.271059 master-0 kubenswrapper[4158]: I0224 05:12:34.271024 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:12:34.271356 master-0 kubenswrapper[4158]: I0224 05:12:34.271070 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.271356 master-0 kubenswrapper[4158]: I0224 
05:12:34.271108 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.271356 master-0 kubenswrapper[4158]: I0224 05:12:34.271159 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:12:34.271356 master-0 kubenswrapper[4158]: I0224 05:12:34.271203 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.271356 master-0 kubenswrapper[4158]: I0224 05:12:34.271236 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.371700 master-0 kubenswrapper[4158]: I0224 05:12:34.371592 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:12:34.371947 master-0 kubenswrapper[4158]: I0224 05:12:34.371698 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.371947 master-0 kubenswrapper[4158]: I0224 05:12:34.371932 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.372075 master-0 kubenswrapper[4158]: I0224 05:12:34.371999 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:12:34.372241 master-0 kubenswrapper[4158]: I0224 05:12:34.372123 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.372241 master-0 kubenswrapper[4158]: I0224 05:12:34.372191 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: 
\"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.372241 master-0 kubenswrapper[4158]: I0224 05:12:34.372230 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.372473 master-0 kubenswrapper[4158]: I0224 05:12:34.372278 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:12:34.372473 master-0 kubenswrapper[4158]: I0224 05:12:34.372367 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.372473 master-0 kubenswrapper[4158]: I0224 05:12:34.372366 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.372473 master-0 kubenswrapper[4158]: I0224 05:12:34.372416 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.372473 master-0 kubenswrapper[4158]: I0224 05:12:34.372466 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372529 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372538 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372465 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372588 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372474 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372612 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372673 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372740 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.372766 master-0 kubenswrapper[4158]: I0224 05:12:34.372683 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.372746 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.372646 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.372791 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.372859 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.372859 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.372916 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.372963 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.373002 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.373063 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.373100 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.373113 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.373185 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:12:34.373229 master-0 kubenswrapper[4158]: I0224 05:12:34.373201 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.392752 master-0 kubenswrapper[4158]: I0224 05:12:34.392663 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:34.394720 master-0 kubenswrapper[4158]: I0224 05:12:34.394671 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:34.394720 master-0 kubenswrapper[4158]: I0224 05:12:34.394717 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:34.394860 master-0 
kubenswrapper[4158]: I0224 05:12:34.394731 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:34.394860 master-0 kubenswrapper[4158]: I0224 05:12:34.394820 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 05:12:34.395896 master-0 kubenswrapper[4158]: E0224 05:12:34.395825 4158 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 24 05:12:34.572767 master-0 kubenswrapper[4158]: E0224 05:12:34.572672 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 24 05:12:34.588550 master-0 kubenswrapper[4158]: I0224 05:12:34.588399 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:12:34.604109 master-0 kubenswrapper[4158]: I0224 05:12:34.604013 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:34.625395 master-0 kubenswrapper[4158]: I0224 05:12:34.625332 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:34.639596 master-0 kubenswrapper[4158]: I0224 05:12:34.639544 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:12:34.643682 master-0 kubenswrapper[4158]: I0224 05:12:34.643617 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:12:34.796883 master-0 kubenswrapper[4158]: I0224 05:12:34.796802 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:34.799464 master-0 kubenswrapper[4158]: I0224 05:12:34.799411 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:34.799464 master-0 kubenswrapper[4158]: I0224 05:12:34.799467 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:34.799667 master-0 kubenswrapper[4158]: I0224 05:12:34.799489 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:34.799667 master-0 kubenswrapper[4158]: I0224 05:12:34.799546 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 05:12:34.800724 master-0 kubenswrapper[4158]: E0224 05:12:34.800653 4158 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 24 05:12:34.938009 master-0 kubenswrapper[4158]: W0224 05:12:34.937744 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:34.938009 master-0 kubenswrapper[4158]: E0224 05:12:34.937895 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:34.965566 master-0 kubenswrapper[4158]: I0224 05:12:34.965469 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:34.973928 master-0 kubenswrapper[4158]: W0224 05:12:34.973809 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:34.974113 master-0 kubenswrapper[4158]: E0224 05:12:34.973946 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:35.375088 master-0 kubenswrapper[4158]: E0224 05:12:35.374934 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 24 05:12:35.395210 master-0 kubenswrapper[4158]: W0224 05:12:35.395096 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:35.395210 master-0 kubenswrapper[4158]: E0224 05:12:35.395181 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:35.415722 master-0 kubenswrapper[4158]: W0224 05:12:35.415631 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod687e92a6cecf1e2beeef16a0b322ad08.slice/crio-b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04 WatchSource:0}: Error finding container b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04: Status 404 returned error can't find the container with id b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04
Feb 24 05:12:35.421712 master-0 kubenswrapper[4158]: I0224 05:12:35.421659 4158 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 05:12:35.455082 master-0 kubenswrapper[4158]: W0224 05:12:35.454538 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc997c8e9d3be51d454d8e61e376bef08.slice/crio-af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622 WatchSource:0}: Error finding container af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622: Status 404 returned error can't find the container with id af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622
Feb 24 05:12:35.492262 master-0 kubenswrapper[4158]: W0224 05:12:35.492182 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12dab5d350ebc129b0bfa4714d330b15.slice/crio-f1fb923ea59745e7261babd35ceb4f756ebbc1afdb5f4b607af29ed59d22b5f8 WatchSource:0}: Error finding container f1fb923ea59745e7261babd35ceb4f756ebbc1afdb5f4b607af29ed59d22b5f8: Status 404 returned error can't find the container with id f1fb923ea59745e7261babd35ceb4f756ebbc1afdb5f4b607af29ed59d22b5f8
Feb 24 05:12:35.527936 master-0 kubenswrapper[4158]: W0224 05:12:35.527786 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:35.528390 master-0 kubenswrapper[4158]: E0224 05:12:35.527940 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:35.596157 master-0 kubenswrapper[4158]: W0224 05:12:35.596061 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c3cb71c9851003c8de7e7c5db4b87e.slice/crio-b4dd28dfe0dfb965f7a49c3ef1803925b4da66fd0d5c36e6b22e6c8bf1f041ec WatchSource:0}: Error finding container b4dd28dfe0dfb965f7a49c3ef1803925b4da66fd0d5c36e6b22e6c8bf1f041ec: Status 404 returned error can't find the container with id b4dd28dfe0dfb965f7a49c3ef1803925b4da66fd0d5c36e6b22e6c8bf1f041ec
Feb 24 05:12:35.601504 master-0 kubenswrapper[4158]: I0224 05:12:35.601434 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:35.602682 master-0 kubenswrapper[4158]: I0224 05:12:35.602655 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:35.602757 master-0 kubenswrapper[4158]: I0224 05:12:35.602710 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:35.602757 master-0 kubenswrapper[4158]: I0224 05:12:35.602730 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:35.602920 master-0 kubenswrapper[4158]: I0224 05:12:35.602798 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 05:12:35.604173 master-0 kubenswrapper[4158]: E0224 05:12:35.604082 4158 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 24 05:12:35.947158 master-0 kubenswrapper[4158]: I0224 05:12:35.946935 4158 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 05:12:35.948855 master-0 kubenswrapper[4158]: E0224 05:12:35.948752 4158 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:35.965652 master-0 kubenswrapper[4158]: I0224 05:12:35.965536 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:36.153169 master-0 kubenswrapper[4158]: I0224 05:12:36.153045 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"f1fb923ea59745e7261babd35ceb4f756ebbc1afdb5f4b607af29ed59d22b5f8"}
Feb 24 05:12:36.155091 master-0 kubenswrapper[4158]: I0224 05:12:36.155069 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622"}
Feb 24 05:12:36.156338 master-0 kubenswrapper[4158]: I0224 05:12:36.156291 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"b61b99b2785eeea3d1aff791e9d12068cc8f8c45a0b7df02a029df563a9b7817"}
Feb 24 05:12:36.157895 master-0 kubenswrapper[4158]: I0224 05:12:36.157871 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04"}
Feb 24 05:12:36.159839 master-0 kubenswrapper[4158]: I0224 05:12:36.159779 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"b4dd28dfe0dfb965f7a49c3ef1803925b4da66fd0d5c36e6b22e6c8bf1f041ec"}
Feb 24 05:12:36.829505 master-0 kubenswrapper[4158]: W0224 05:12:36.829447 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:36.829505 master-0 kubenswrapper[4158]: E0224 05:12:36.829504 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:36.965204 master-0 kubenswrapper[4158]: I0224 05:12:36.965143 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:36.975910 master-0 kubenswrapper[4158]: E0224 05:12:36.975864 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Feb 24 05:12:37.204496 master-0 kubenswrapper[4158]: I0224 05:12:37.204435 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:37.205506 master-0 kubenswrapper[4158]: I0224 05:12:37.205471 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:37.205506 master-0 kubenswrapper[4158]: I0224 05:12:37.205509 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:37.205652 master-0 kubenswrapper[4158]: I0224 05:12:37.205521 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:37.205652 master-0 kubenswrapper[4158]: I0224 05:12:37.205562 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 05:12:37.206280 master-0 kubenswrapper[4158]: E0224 05:12:37.206245 4158 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 24 05:12:37.494970 master-0 kubenswrapper[4158]: W0224 05:12:37.494888 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:37.494970 master-0 kubenswrapper[4158]: E0224 05:12:37.494966 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:37.536011 master-0 kubenswrapper[4158]: W0224 05:12:37.535979 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:37.536092 master-0 kubenswrapper[4158]: E0224 05:12:37.536019 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:37.606695 master-0 kubenswrapper[4158]: W0224 05:12:37.606628 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:37.606778 master-0 kubenswrapper[4158]: E0224 05:12:37.606745 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:37.965423 master-0 kubenswrapper[4158]: I0224 05:12:37.965247 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:38.167614 master-0 kubenswrapper[4158]: I0224 05:12:38.166962 4158 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="23d5e42153d1239bec04afab6c545620b9ef683ee911bb6159c7f6877a1bbf3e" exitCode=0
Feb 24 05:12:38.167614 master-0 kubenswrapper[4158]: I0224 05:12:38.167024 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"23d5e42153d1239bec04afab6c545620b9ef683ee911bb6159c7f6877a1bbf3e"}
Feb 24 05:12:38.167614 master-0 kubenswrapper[4158]: I0224 05:12:38.167159 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:38.168040 master-0 kubenswrapper[4158]: I0224 05:12:38.167995 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:38.168040 master-0 kubenswrapper[4158]: I0224 05:12:38.168034 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:38.168146 master-0 kubenswrapper[4158]: I0224 05:12:38.168049 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:38.915009 master-0 kubenswrapper[4158]: E0224 05:12:38.914716 4158 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189716b713e58c16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:33.96221647 +0000 UTC m=+0.626213213,LastTimestamp:2026-02-24 05:12:33.96221647 +0000 UTC m=+0.626213213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 05:12:38.965626 master-0 kubenswrapper[4158]: I0224 05:12:38.965541 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:39.172787 master-0 kubenswrapper[4158]: I0224 05:12:39.172557 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de"}
Feb 24 05:12:39.172787 master-0 kubenswrapper[4158]: I0224 05:12:39.172623 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0"}
Feb 24 05:12:39.172787 master-0 kubenswrapper[4158]: I0224 05:12:39.172732 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:39.174113 master-0 kubenswrapper[4158]: I0224 05:12:39.174078 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:39.174229 master-0 kubenswrapper[4158]: I0224 05:12:39.174130 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:39.174229 master-0 kubenswrapper[4158]: I0224 05:12:39.174150 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:39.175698 master-0 kubenswrapper[4158]: I0224 05:12:39.175657 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log"
Feb 24 05:12:39.176288 master-0 kubenswrapper[4158]: I0224 05:12:39.176233 4158 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="ee0888064e3897dd09bd934ea8de7e1912c230608867b478ac47b5b74800f4fe" exitCode=1
Feb 24 05:12:39.176439 master-0 kubenswrapper[4158]: I0224 05:12:39.176293 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"ee0888064e3897dd09bd934ea8de7e1912c230608867b478ac47b5b74800f4fe"}
Feb 24 05:12:39.176439 master-0 kubenswrapper[4158]: I0224 05:12:39.176340 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:39.183133 master-0 kubenswrapper[4158]: I0224 05:12:39.183082 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:39.183351 master-0 kubenswrapper[4158]: I0224 05:12:39.183161 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:39.183351 master-0 kubenswrapper[4158]: I0224 05:12:39.183183 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:39.184029 master-0 kubenswrapper[4158]: I0224 05:12:39.183985 4158 scope.go:117] "RemoveContainer" containerID="ee0888064e3897dd09bd934ea8de7e1912c230608867b478ac47b5b74800f4fe"
Feb 24 05:12:39.964927 master-0 kubenswrapper[4158]: I0224 05:12:39.964851 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:40.178057 master-0 kubenswrapper[4158]: E0224 05:12:40.177991 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Feb 24 05:12:40.181480 master-0 kubenswrapper[4158]: I0224 05:12:40.181445 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log"
Feb 24 05:12:40.181999 master-0 kubenswrapper[4158]: I0224 05:12:40.181975 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/0.log"
Feb 24 05:12:40.182550 master-0 kubenswrapper[4158]: I0224 05:12:40.182515 4158 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="4a6431ad2e348da673451c4ac01f0742fed27f8448f349ab43dae3e0ab73a9ce" exitCode=1
Feb 24 05:12:40.182647 master-0 kubenswrapper[4158]: I0224 05:12:40.182617 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:40.182690 master-0 kubenswrapper[4158]: I0224 05:12:40.182645 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:40.182723 master-0 kubenswrapper[4158]: I0224 05:12:40.182636 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"4a6431ad2e348da673451c4ac01f0742fed27f8448f349ab43dae3e0ab73a9ce"}
Feb 24 05:12:40.182779 master-0 kubenswrapper[4158]: I0224 05:12:40.182760 4158 scope.go:117] "RemoveContainer" containerID="ee0888064e3897dd09bd934ea8de7e1912c230608867b478ac47b5b74800f4fe"
Feb 24 05:12:40.183967 master-0 kubenswrapper[4158]: I0224 05:12:40.183788 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:40.183967 master-0 kubenswrapper[4158]: I0224 05:12:40.183816 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:40.183967 master-0 kubenswrapper[4158]: I0224 05:12:40.183843 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:40.183967 master-0 kubenswrapper[4158]: I0224 05:12:40.183853 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:40.183967 master-0 kubenswrapper[4158]: I0224 05:12:40.183861 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:40.183967 master-0 kubenswrapper[4158]: I0224 05:12:40.183867 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:40.184486 master-0 kubenswrapper[4158]: I0224 05:12:40.184460 4158 scope.go:117] "RemoveContainer" containerID="4a6431ad2e348da673451c4ac01f0742fed27f8448f349ab43dae3e0ab73a9ce"
Feb 24 05:12:40.184699 master-0 kubenswrapper[4158]: E0224 05:12:40.184657 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 24 05:12:40.230096 master-0 kubenswrapper[4158]: I0224 05:12:40.229933 4158 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 05:12:40.231476 master-0 kubenswrapper[4158]: E0224 05:12:40.231413 4158 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:40.406950 master-0 kubenswrapper[4158]: I0224 05:12:40.406852 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:40.408385 master-0 kubenswrapper[4158]: I0224 05:12:40.408280 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:40.408385 master-0 kubenswrapper[4158]: I0224 05:12:40.408339 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:40.408385 master-0 kubenswrapper[4158]: I0224 05:12:40.408350 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:40.408787 master-0 kubenswrapper[4158]: I0224 05:12:40.408423 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 05:12:40.409689 master-0 kubenswrapper[4158]: E0224 05:12:40.409619 4158 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 24 05:12:40.846998 master-0 kubenswrapper[4158]: W0224 05:12:40.846845 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:40.847268 master-0 kubenswrapper[4158]: E0224 05:12:40.847030 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:40.964818 master-0 kubenswrapper[4158]: I0224 05:12:40.964764 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:41.048295 master-0 kubenswrapper[4158]: W0224 05:12:41.048153 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:41.048532 master-0 kubenswrapper[4158]: E0224 05:12:41.048341 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:41.185446 master-0 kubenswrapper[4158]: I0224 05:12:41.185284 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:41.186363 master-0 kubenswrapper[4158]: I0224 05:12:41.186248 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:41.186470 master-0 kubenswrapper[4158]: I0224 05:12:41.186377 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:41.186470 master-0 kubenswrapper[4158]: I0224 05:12:41.186406 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:41.187180 master-0 kubenswrapper[4158]: I0224 05:12:41.187132 4158 scope.go:117] "RemoveContainer" containerID="4a6431ad2e348da673451c4ac01f0742fed27f8448f349ab43dae3e0ab73a9ce"
Feb 24 05:12:41.187534 master-0 kubenswrapper[4158]: E0224 05:12:41.187476 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 24 05:12:41.878862 master-0 kubenswrapper[4158]: W0224 05:12:41.878714 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:41.878862 master-0 kubenswrapper[4158]: E0224 05:12:41.878867 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 24 05:12:41.965042 master-0 kubenswrapper[4158]: I0224 05:12:41.964951 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:42.965491 master-0 kubenswrapper[4158]: I0224 05:12:42.965230 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 24 05:12:43.192241 master-0 kubenswrapper[4158]: I0224 05:12:43.192188 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log"
Feb 24 05:12:43.195287 master-0 kubenswrapper[4158]: I0224 05:12:43.195224 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"d6dd4a61ed7af8ebd78eddfac6cf4fdcc660e18cd4faabe4c2d616a566d86ff6"}
Feb 24 05:12:43.197563 master-0 kubenswrapper[4158]: I0224 05:12:43.197519 4158 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d" exitCode=0
Feb 24 05:12:43.197789 master-0 kubenswrapper[4158]: I0224 05:12:43.197677 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:43.197953 master-0 kubenswrapper[4158]: I0224 05:12:43.197593 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d"}
Feb 24 05:12:43.198832 master-0 kubenswrapper[4158]: I0224 05:12:43.198806 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:43.198889 master-0 kubenswrapper[4158]: I0224 05:12:43.198842 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:43.198889 master-0 kubenswrapper[4158]: I0224 05:12:43.198880 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:43.199628 master-0 kubenswrapper[4158]: I0224 05:12:43.199578 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"ec92c2ccaab799d81de24af8faba27c40dd8197fcd80279d1de6e4daee2ed87c"}
Feb 24 05:12:43.199675 master-0 kubenswrapper[4158]: I0224 05:12:43.199647 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:43.201097 master-0 kubenswrapper[4158]: I0224 05:12:43.200629 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:43.201097 master-0 kubenswrapper[4158]: I0224 05:12:43.200673 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:43.201097 master-0 kubenswrapper[4158]: I0224 05:12:43.200683 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:43.202892 master-0 kubenswrapper[4158]: I0224 05:12:43.202476 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:43.203811 master-0 kubenswrapper[4158]: I0224 05:12:43.203316 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:43.203811 master-0 kubenswrapper[4158]: I0224 05:12:43.203339 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:43.203811 master-0 kubenswrapper[4158]: I0224 05:12:43.203347 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:44.096244 master-0 kubenswrapper[4158]: E0224 05:12:44.091375 4158 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 24 05:12:44.204161 master-0 kubenswrapper[4158]: I0224 05:12:44.203821 4158 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="d6dd4a61ed7af8ebd78eddfac6cf4fdcc660e18cd4faabe4c2d616a566d86ff6" exitCode=1
Feb 24 05:12:44.204161 master-0 kubenswrapper[4158]: I0224 05:12:44.203919 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"d6dd4a61ed7af8ebd78eddfac6cf4fdcc660e18cd4faabe4c2d616a566d86ff6"}
Feb 24 05:12:44.205957 master-0 kubenswrapper[4158]: I0224 05:12:44.205914 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14"}
Feb 24 05:12:44.206131 master-0 kubenswrapper[4158]: I0224 05:12:44.205956 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:44.207754 master-0 kubenswrapper[4158]: I0224 05:12:44.207691 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:44.207845 master-0 kubenswrapper[4158]: I0224 05:12:44.207774 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:44.207845 master-0 kubenswrapper[4158]: I0224 05:12:44.207801 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:45.120631 master-0 kubenswrapper[4158]: I0224 05:12:45.120568 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:45.121138 master-0 kubenswrapper[4158]: W0224 05:12:45.120641 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 24 05:12:45.121138 master-0 kubenswrapper[4158]: E0224 05:12:45.120676 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 24 05:12:45.212953 master-0 kubenswrapper[4158]: I0224 05:12:45.212799 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e"}
Feb 24 05:12:45.213184 master-0 kubenswrapper[4158]: I0224 05:12:45.212945 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:45.214289 master-0 kubenswrapper[4158]: I0224 05:12:45.214217 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:45.214289 master-0 kubenswrapper[4158]: I0224 05:12:45.214258 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:45.214289 master-0 kubenswrapper[4158]: I0224 05:12:45.214268 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:45.214668 master-0 kubenswrapper[4158]: I0224 05:12:45.214630 4158 scope.go:117] "RemoveContainer" containerID="d6dd4a61ed7af8ebd78eddfac6cf4fdcc660e18cd4faabe4c2d616a566d86ff6"
Feb 24 05:12:45.974223 master-0 kubenswrapper[4158]: I0224 05:12:45.974069 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:46.224052 master-0 kubenswrapper[4158]: I0224 05:12:46.223957 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8"}
Feb 24 05:12:46.224814 master-0 kubenswrapper[4158]: I0224 05:12:46.224134 4158 kubelet_node_status.go:401] "Setting node
annotation to enable volume controller attach/detach" Feb 24 05:12:46.224951 master-0 kubenswrapper[4158]: I0224 05:12:46.224921 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:46.224951 master-0 kubenswrapper[4158]: I0224 05:12:46.224948 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:46.225017 master-0 kubenswrapper[4158]: I0224 05:12:46.224959 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:46.587059 master-0 kubenswrapper[4158]: E0224 05:12:46.586973 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 24 05:12:46.810446 master-0 kubenswrapper[4158]: I0224 05:12:46.810370 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:46.811770 master-0 kubenswrapper[4158]: I0224 05:12:46.811709 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:46.811830 master-0 kubenswrapper[4158]: I0224 05:12:46.811785 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:46.811830 master-0 kubenswrapper[4158]: I0224 05:12:46.811806 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:46.811904 master-0 kubenswrapper[4158]: I0224 05:12:46.811880 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 05:12:46.820417 master-0 kubenswrapper[4158]: E0224 05:12:46.820366 4158 kubelet_node_status.go:99] "Unable to register node 
with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 24 05:12:46.970988 master-0 kubenswrapper[4158]: I0224 05:12:46.970860 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 05:12:47.230675 master-0 kubenswrapper[4158]: I0224 05:12:47.230474 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:47.230675 master-0 kubenswrapper[4158]: I0224 05:12:47.230507 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:47.230675 master-0 kubenswrapper[4158]: I0224 05:12:47.230449 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea"} Feb 24 05:12:47.232013 master-0 kubenswrapper[4158]: I0224 05:12:47.231957 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:47.232013 master-0 kubenswrapper[4158]: I0224 05:12:47.232010 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:47.232138 master-0 kubenswrapper[4158]: I0224 05:12:47.232029 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:47.232138 master-0 kubenswrapper[4158]: I0224 05:12:47.232115 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:47.232280 master-0 kubenswrapper[4158]: I0224 
05:12:47.232149 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:47.232280 master-0 kubenswrapper[4158]: I0224 05:12:47.232172 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:47.795926 master-0 kubenswrapper[4158]: I0224 05:12:47.795833 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:47.842545 master-0 kubenswrapper[4158]: I0224 05:12:47.842448 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:47.973296 master-0 kubenswrapper[4158]: I0224 05:12:47.973203 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 05:12:48.233052 master-0 kubenswrapper[4158]: I0224 05:12:48.232950 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:48.234169 master-0 kubenswrapper[4158]: I0224 05:12:48.232950 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:48.234511 master-0 kubenswrapper[4158]: I0224 05:12:48.234427 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:48.234620 master-0 kubenswrapper[4158]: I0224 05:12:48.234519 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:48.234620 master-0 kubenswrapper[4158]: I0224 05:12:48.234540 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:48.235013 
master-0 kubenswrapper[4158]: I0224 05:12:48.234911 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:48.235114 master-0 kubenswrapper[4158]: I0224 05:12:48.235056 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:48.235114 master-0 kubenswrapper[4158]: I0224 05:12:48.235078 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:48.754559 master-0 kubenswrapper[4158]: I0224 05:12:48.754472 4158 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 05:12:48.778627 master-0 kubenswrapper[4158]: I0224 05:12:48.778531 4158 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 05:12:48.817027 master-0 kubenswrapper[4158]: I0224 05:12:48.816845 4158 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:48.827792 master-0 kubenswrapper[4158]: I0224 05:12:48.827687 4158 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:48.924605 master-0 kubenswrapper[4158]: E0224 05:12:48.924295 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b713e58c16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:33.96221647 
+0000 UTC m=+0.626213213,LastTimestamp:2026-02-24 05:12:33.96221647 +0000 UTC m=+0.626213213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.933070 master-0 kubenswrapper[4158]: E0224 05:12:48.932847 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e662b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,LastTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.941344 master-0 kubenswrapper[4158]: E0224 05:12:48.941192 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e70495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC m=+0.676641234,LastTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC 
m=+0.676641234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.946879 master-0 kubenswrapper[4158]: E0224 05:12:48.946667 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e75e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012667452 +0000 UTC m=+0.676664185,LastTimestamp:2026-02-24 05:12:34.012667452 +0000 UTC m=+0.676664185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.953574 master-0 kubenswrapper[4158]: E0224 05:12:48.953272 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b71b88ec61 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.090364001 +0000 UTC m=+0.754360694,LastTimestamp:2026-02-24 05:12:34.090364001 +0000 UTC m=+0.754360694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.959876 master-0 kubenswrapper[4158]: E0224 05:12:48.959697 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e662b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e662b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,LastTimestamp:2026-02-24 05:12:34.18997774 +0000 UTC m=+0.853974463,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.969985 master-0 kubenswrapper[4158]: I0224 05:12:48.969927 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 05:12:48.970155 master-0 kubenswrapper[4158]: E0224 05:12:48.969904 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e70495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e70495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC m=+0.676641234,LastTimestamp:2026-02-24 05:12:34.190016561 +0000 UTC m=+0.854013294,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.976040 master-0 kubenswrapper[4158]: E0224 05:12:48.975908 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e75e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e75e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012667452 +0000 UTC m=+0.676664185,LastTimestamp:2026-02-24 05:12:34.190036531 +0000 UTC m=+0.854033254,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.981753 master-0 kubenswrapper[4158]: E0224 05:12:48.981579 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e662b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e662b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,LastTimestamp:2026-02-24 05:12:34.245579377 +0000 UTC m=+0.909576080,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.988093 master-0 kubenswrapper[4158]: E0224 05:12:48.987977 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e70495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e70495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC m=+0.676641234,LastTimestamp:2026-02-24 05:12:34.245633719 +0000 UTC m=+0.909630422,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:48.994976 master-0 kubenswrapper[4158]: E0224 05:12:48.994876 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e75e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e75e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012667452 +0000 UTC m=+0.676664185,LastTimestamp:2026-02-24 05:12:34.245644979 +0000 UTC m=+0.909641682,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.000005 master-0 kubenswrapper[4158]: E0224 05:12:48.999872 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e662b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e662b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,LastTimestamp:2026-02-24 05:12:34.246965223 +0000 UTC m=+0.910961926,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.005070 master-0 kubenswrapper[4158]: E0224 05:12:49.004828 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e70495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e70495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC m=+0.676641234,LastTimestamp:2026-02-24 05:12:34.246987213 +0000 UTC m=+0.910983916,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.012089 master-0 kubenswrapper[4158]: E0224 05:12:49.011922 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e75e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e75e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012667452 +0000 UTC m=+0.676664185,LastTimestamp:2026-02-24 05:12:34.247006624 +0000 UTC m=+0.911003327,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.019255 master-0 kubenswrapper[4158]: E0224 05:12:49.019104 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e662b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e662b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,LastTimestamp:2026-02-24 05:12:34.247749927 +0000 UTC m=+0.911746630,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.024574 master-0 kubenswrapper[4158]: E0224 05:12:49.024461 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e70495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e70495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC m=+0.676641234,LastTimestamp:2026-02-24 05:12:34.247772158 +0000 UTC m=+0.911768881,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.031347 master-0 kubenswrapper[4158]: E0224 05:12:49.031195 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e75e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e75e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012667452 +0000 UTC m=+0.676664185,LastTimestamp:2026-02-24 05:12:34.247785058 +0000 UTC m=+0.911781761,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.040900 master-0 kubenswrapper[4158]: E0224 05:12:49.040779 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e662b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e662b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,LastTimestamp:2026-02-24 05:12:34.248164795 +0000 UTC m=+0.912161518,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.046763 master-0 kubenswrapper[4158]: E0224 05:12:49.046550 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e70495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e70495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC m=+0.676641234,LastTimestamp:2026-02-24 05:12:34.248271607 +0000 UTC m=+0.912268340,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.053900 master-0 kubenswrapper[4158]: E0224 05:12:49.053714 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e75e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e75e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012667452 +0000 UTC m=+0.676664185,LastTimestamp:2026-02-24 05:12:34.248347048 +0000 UTC m=+0.912343771,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.059162 master-0 kubenswrapper[4158]: E0224 05:12:49.059040 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e662b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e662b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,LastTimestamp:2026-02-24 05:12:34.24891191 +0000 UTC m=+0.912908643,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.064906 master-0 kubenswrapper[4158]: E0224 05:12:49.064758 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e70495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e70495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC m=+0.676641234,LastTimestamp:2026-02-24 05:12:34.24895937 +0000 UTC m=+0.912956103,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.070751 master-0 kubenswrapper[4158]: E0224 05:12:49.070581 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e75e3c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e75e3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012667452 +0000 UTC m=+0.676664185,LastTimestamp:2026-02-24 05:12:34.248990341 +0000 UTC m=+0.912987064,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.075831 master-0 kubenswrapper[4158]: E0224 05:12:49.075684 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e662b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e662b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.01260306 +0000 UTC m=+0.676599783,LastTimestamp:2026-02-24 05:12:34.25054736 +0000 UTC m=+0.914544093,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.080897 master-0 kubenswrapper[4158]: E0224 05:12:49.080747 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189716b716e70495\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189716b716e70495 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:34.012644501 +0000 UTC m=+0.676641234,LastTimestamp:2026-02-24 05:12:34.25058662 +0000 UTC m=+0.914583343,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.090046 master-0 kubenswrapper[4158]: E0224 05:12:49.089896 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716b76ae1c28d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:35.421586061 +0000 UTC m=+2.085582794,LastTimestamp:2026-02-24 05:12:35.421586061 +0000 UTC m=+2.085582794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.095899 master-0 kubenswrapper[4158]: E0224 05:12:49.095765 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b76b6469c7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:35.430148551 +0000 UTC m=+2.094145254,LastTimestamp:2026-02-24 05:12:35.430148551 +0000 UTC m=+2.094145254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.104205 master-0 kubenswrapper[4158]: E0224 05:12:49.104032 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b76d190ec7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:35.458764487 +0000 UTC m=+2.122761220,LastTimestamp:2026-02-24 05:12:35.458764487 +0000 UTC m=+2.122761220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.111236 master-0 kubenswrapper[4158]: E0224 05:12:49.111126 4158 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189716b76f48dbd0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:35.4954516 +0000 UTC m=+2.159448333,LastTimestamp:2026-02-24 05:12:35.4954516 +0000 UTC m=+2.159448333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.119532 master-0 kubenswrapper[4158]: E0224 05:12:49.119354 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189716b7757ebd4d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:35.599646029 +0000 UTC m=+2.263642762,LastTimestamp:2026-02-24 05:12:35.599646029 +0000 UTC 
m=+2.263642762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.126961 master-0 kubenswrapper[4158]: E0224 05:12:49.126772 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b7cff6d922 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" in 1.658s (1.658s including waiting). 
Image size: 464984427 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:37.117466914 +0000 UTC m=+3.781463607,LastTimestamp:2026-02-24 05:12:37.117466914 +0000 UTC m=+3.781463607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.133768 master-0 kubenswrapper[4158]: E0224 05:12:49.133622 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b7dbc6501f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:37.315612703 +0000 UTC m=+3.979609386,LastTimestamp:2026-02-24 05:12:37.315612703 +0000 UTC m=+3.979609386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.141146 master-0 kubenswrapper[4158]: E0224 05:12:49.141016 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b7dcc4a583 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:37.332280707 +0000 UTC m=+3.996277400,LastTimestamp:2026-02-24 05:12:37.332280707 +0000 UTC m=+3.996277400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.147978 master-0 kubenswrapper[4158]: E0224 05:12:49.147825 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189716b802eee81b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" in 2.477s (2.477s including waiting). 
Image size: 529218694 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:37.972584475 +0000 UTC m=+4.636581168,LastTimestamp:2026-02-24 05:12:37.972584475 +0000 UTC m=+4.636581168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.155447 master-0 kubenswrapper[4158]: E0224 05:12:49.155268 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b80ebefc8c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.170770572 +0000 UTC m=+4.834767265,LastTimestamp:2026-02-24 05:12:38.170770572 +0000 UTC m=+4.834767265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.162199 master-0 kubenswrapper[4158]: E0224 05:12:49.162006 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189716b80f087b30 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.17558712 +0000 UTC m=+4.839583813,LastTimestamp:2026-02-24 05:12:38.17558712 +0000 UTC m=+4.839583813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.169655 master-0 kubenswrapper[4158]: E0224 05:12:49.169527 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189716b80ff3cfd7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.191009751 +0000 UTC m=+4.855006444,LastTimestamp:2026-02-24 05:12:38.191009751 +0000 UTC m=+4.855006444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.176793 master-0 kubenswrapper[4158]: E0224 05:12:49.176656 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189716b810387136 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.19550751 +0000 UTC m=+4.859504193,LastTimestamp:2026-02-24 05:12:38.19550751 +0000 UTC m=+4.859504193,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.183539 master-0 kubenswrapper[4158]: E0224 05:12:49.183383 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b81aed7205 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.375141893 +0000 UTC m=+5.039138596,LastTimestamp:2026-02-24 05:12:38.375141893 +0000 UTC m=+5.039138596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.189460 master-0 kubenswrapper[4158]: E0224 05:12:49.189281 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189716b81af682a2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.37573597 +0000 UTC m=+5.039732673,LastTimestamp:2026-02-24 05:12:38.37573597 +0000 UTC m=+5.039732673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.196663 master-0 kubenswrapper[4158]: E0224 05:12:49.196430 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189716b81bc0b38a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.388986762 +0000 UTC m=+5.052983455,LastTimestamp:2026-02-24 05:12:38.388986762 +0000 UTC m=+5.052983455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.204580 master-0 kubenswrapper[4158]: E0224 05:12:49.204341 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b81bd93903 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.390593795 +0000 UTC m=+5.054590488,LastTimestamp:2026-02-24 05:12:38.390593795 +0000 UTC m=+5.054590488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.213142 master-0 kubenswrapper[4158]: E0224 05:12:49.212952 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b80ebefc8c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b80ebefc8c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.170770572 +0000 UTC 
m=+4.834767265,LastTimestamp:2026-02-24 05:12:39.186798044 +0000 UTC m=+5.850794767,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.220219 master-0 kubenswrapper[4158]: E0224 05:12:49.220084 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b81aed7205\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b81aed7205 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.375141893 +0000 UTC m=+5.039138596,LastTimestamp:2026-02-24 05:12:39.403469297 +0000 UTC m=+6.067465990,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.227700 master-0 kubenswrapper[4158]: E0224 05:12:49.227576 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b81bd93903\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b81bd93903 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.390593795 +0000 UTC m=+5.054590488,LastTimestamp:2026-02-24 05:12:39.415606103 +0000 UTC m=+6.079602806,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.236413 master-0 kubenswrapper[4158]: E0224 05:12:49.236228 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b886c7daf6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:40.184617718 +0000 UTC m=+6.848614421,LastTimestamp:2026-02-24 05:12:40.184617718 +0000 UTC m=+6.848614421,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.237149 master-0 kubenswrapper[4158]: I0224 05:12:49.237100 4158 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Feb 24 05:12:49.240998 master-0 kubenswrapper[4158]: I0224 05:12:49.240925 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:49.241061 master-0 kubenswrapper[4158]: I0224 05:12:49.241043 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:49.241097 master-0 kubenswrapper[4158]: I0224 05:12:49.241073 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:49.246676 master-0 kubenswrapper[4158]: I0224 05:12:49.246633 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:12:49.250021 master-0 kubenswrapper[4158]: E0224 05:12:49.249798 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b886c7daf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b886c7daf6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:40.184617718 +0000 UTC m=+6.848614421,LastTimestamp:2026-02-24 05:12:41.187416765 +0000 UTC m=+7.851413488,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.256573 master-0 kubenswrapper[4158]: E0224 05:12:49.256407 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b90cf88786 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 7.005s (7.005s including waiting). Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.435954566 +0000 UTC m=+9.099951299,LastTimestamp:2026-02-24 05:12:42.435954566 +0000 UTC m=+9.099951299,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.264089 master-0 kubenswrapper[4158]: E0224 05:12:49.263937 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189716b90e789246 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully 
pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 6.861s (6.861s including waiting). Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.461123142 +0000 UTC m=+9.125119845,LastTimestamp:2026-02-24 05:12:42.461123142 +0000 UTC m=+9.125119845,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.269618 master-0 kubenswrapper[4158]: E0224 05:12:49.269423 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716b910a88632 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" in 7.076s (7.076s including waiting). 
Image size: 943734757 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.49782021 +0000 UTC m=+9.161816943,LastTimestamp:2026-02-24 05:12:42.49782021 +0000 UTC m=+9.161816943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.274654 master-0 kubenswrapper[4158]: E0224 05:12:49.274519 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189716b91c3f7363 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.692260707 +0000 UTC m=+9.356257400,LastTimestamp:2026-02-24 05:12:42.692260707 +0000 UTC m=+9.356257400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.281474 master-0 kubenswrapper[4158]: E0224 05:12:49.281349 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b91c42f1a1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.692489633 +0000 UTC m=+9.356486336,LastTimestamp:2026-02-24 05:12:42.692489633 +0000 UTC m=+9.356486336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.286848 master-0 kubenswrapper[4158]: E0224 05:12:49.286706 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189716b91d1e10ae kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:56c3cb71c9851003c8de7e7c5db4b87e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.706849966 +0000 UTC m=+9.370846659,LastTimestamp:2026-02-24 05:12:42.706849966 +0000 UTC m=+9.370846659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.291570 master-0 kubenswrapper[4158]: E0224 05:12:49.291422 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b91d27f62e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.707498542 +0000 UTC m=+9.371495235,LastTimestamp:2026-02-24 05:12:42.707498542 +0000 UTC m=+9.371495235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.296593 master-0 kubenswrapper[4158]: E0224 05:12:49.296351 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b91d38aabb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.708593339 +0000 UTC m=+9.372590042,LastTimestamp:2026-02-24 05:12:42.708593339 +0000 UTC m=+9.372590042,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.301714 master-0 
kubenswrapper[4158]: E0224 05:12:49.301529 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716b922effb9f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.804493215 +0000 UTC m=+9.468489948,LastTimestamp:2026-02-24 05:12:42.804493215 +0000 UTC m=+9.468489948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.307493 master-0 kubenswrapper[4158]: E0224 05:12:49.307271 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716b923bdbd19 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.817977625 +0000 UTC m=+9.481974318,LastTimestamp:2026-02-24 05:12:42.817977625 +0000 UTC m=+9.481974318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.316209 master-0 kubenswrapper[4158]: E0224 05:12:49.316049 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716b93aa7bbdf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:43.202411487 +0000 UTC m=+9.866408180,LastTimestamp:2026-02-24 05:12:43.202411487 +0000 UTC m=+9.866408180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.323448 master-0 kubenswrapper[4158]: E0224 05:12:49.323169 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716b94aad8840 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created 
container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:43.471226944 +0000 UTC m=+10.135223637,LastTimestamp:2026-02-24 05:12:43.471226944 +0000 UTC m=+10.135223637,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.330385 master-0 kubenswrapper[4158]: E0224 05:12:49.330250 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716b94b781bc3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:43.484502979 +0000 UTC m=+10.148499672,LastTimestamp:2026-02-24 05:12:43.484502979 +0000 UTC m=+10.148499672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.335902 master-0 kubenswrapper[4158]: E0224 05:12:49.335772 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716b94b8b699c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:43.485768092 +0000 UTC m=+10.149764805,LastTimestamp:2026-02-24 05:12:43.485768092 +0000 UTC m=+10.149764805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.343244 master-0 kubenswrapper[4158]: E0224 05:12:49.343092 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b99fbbacec kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\" in 2.189s (2.189s including waiting). 
Image size: 505137106 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:44.898217196 +0000 UTC m=+11.562213889,LastTimestamp:2026-02-24 05:12:44.898217196 +0000 UTC m=+11.562213889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.350658 master-0 kubenswrapper[4158]: E0224 05:12:49.350512 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b9ae1c4abc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:45.139430076 +0000 UTC m=+11.803426769,LastTimestamp:2026-02-24 05:12:45.139430076 +0000 UTC m=+11.803426769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.357583 master-0 kubenswrapper[4158]: E0224 05:12:49.357445 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b9aeabf853 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:45.148846163 +0000 UTC m=+11.812842846,LastTimestamp:2026-02-24 05:12:45.148846163 +0000 UTC m=+11.812842846,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.364214 master-0 kubenswrapper[4158]: E0224 05:12:49.364056 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b9b2c063bd kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:45.217293245 +0000 UTC m=+11.881289968,LastTimestamp:2026-02-24 05:12:45.217293245 +0000 UTC m=+11.881289968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.370358 master-0 kubenswrapper[4158]: E0224 05:12:49.370186 4158 event.go:359] "Server rejected event (will not retry!)" err="events 
\"bootstrap-kube-controller-manager-master-0.189716b91c42f1a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b91c42f1a1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.692489633 +0000 UTC m=+9.356486336,LastTimestamp:2026-02-24 05:12:45.424102204 +0000 UTC m=+12.088098907,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.377365 master-0 kubenswrapper[4158]: E0224 05:12:49.377114 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189716b91d27f62e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189716b91d27f62e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:c9ad9373c007a4fcd25e70622bdc8deb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:42.707498542 +0000 UTC m=+9.371495235,LastTimestamp:2026-02-24 05:12:45.461679924 +0000 UTC m=+12.125676627,Count:2,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.384926 master-0 kubenswrapper[4158]: E0224 05:12:49.384770 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716ba07f69891 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\" in 3.161s (3.161s including waiting). 
Image size: 514875199 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:46.646909073 +0000 UTC m=+13.310905796,LastTimestamp:2026-02-24 05:12:46.646909073 +0000 UTC m=+13.310905796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.392594 master-0 kubenswrapper[4158]: E0224 05:12:49.392376 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716ba15762344 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:46.87337146 +0000 UTC m=+13.537368153,LastTimestamp:2026-02-24 05:12:46.87337146 +0000 UTC m=+13.537368153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.399500 master-0 kubenswrapper[4158]: E0224 05:12:49.399263 4158 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189716ba1663c184 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:687e92a6cecf1e2beeef16a0b322ad08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:46.888944004 +0000 UTC m=+13.552940737,LastTimestamp:2026-02-24 05:12:46.888944004 +0000 UTC m=+13.552940737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:49.510662 master-0 kubenswrapper[4158]: I0224 05:12:49.510469 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:12:49.510902 master-0 kubenswrapper[4158]: I0224 05:12:49.510786 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:49.512671 master-0 kubenswrapper[4158]: I0224 05:12:49.512517 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:49.512671 master-0 kubenswrapper[4158]: I0224 05:12:49.512573 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:49.512671 master-0 kubenswrapper[4158]: I0224 05:12:49.512591 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:49.971846 master-0 kubenswrapper[4158]: I0224 05:12:49.971539 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 05:12:50.060049 master-0 kubenswrapper[4158]: W0224 
05:12:50.059904 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 24 05:12:50.060049 master-0 kubenswrapper[4158]: E0224 05:12:50.059977 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 24 05:12:50.240537 master-0 kubenswrapper[4158]: I0224 05:12:50.240445 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:50.241960 master-0 kubenswrapper[4158]: I0224 05:12:50.241554 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:50.241960 master-0 kubenswrapper[4158]: I0224 05:12:50.241596 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:50.241960 master-0 kubenswrapper[4158]: I0224 05:12:50.241618 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:50.560539 master-0 kubenswrapper[4158]: W0224 05:12:50.560257 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 24 05:12:50.560539 master-0 kubenswrapper[4158]: E0224 05:12:50.560398 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API 
group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 24 05:12:50.973771 master-0 kubenswrapper[4158]: I0224 05:12:50.973544 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 05:12:51.242899 master-0 kubenswrapper[4158]: I0224 05:12:51.242853 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:51.243794 master-0 kubenswrapper[4158]: I0224 05:12:51.243758 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:51.243877 master-0 kubenswrapper[4158]: I0224 05:12:51.243813 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:12:51.243877 master-0 kubenswrapper[4158]: I0224 05:12:51.243826 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:51.971716 master-0 kubenswrapper[4158]: I0224 05:12:51.971508 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 24 05:12:52.144340 master-0 kubenswrapper[4158]: I0224 05:12:52.144036 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:12:52.146018 master-0 kubenswrapper[4158]: I0224 05:12:52.145958 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:12:52.146124 master-0 kubenswrapper[4158]: I0224 05:12:52.146031 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Feb 24 05:12:52.146124 master-0 kubenswrapper[4158]: I0224 05:12:52.146050 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:12:52.146698 master-0 kubenswrapper[4158]: I0224 05:12:52.146646 4158 scope.go:117] "RemoveContainer" containerID="4a6431ad2e348da673451c4ac01f0742fed27f8448f349ab43dae3e0ab73a9ce" Feb 24 05:12:52.160547 master-0 kubenswrapper[4158]: E0224 05:12:52.160386 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b80ebefc8c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b80ebefc8c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.170770572 +0000 UTC m=+4.834767265,LastTimestamp:2026-02-24 05:12:52.150976529 +0000 UTC m=+18.814973252,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:12:52.466582 master-0 kubenswrapper[4158]: E0224 05:12:52.466235 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b81aed7205\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b81aed7205 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.375141893 +0000 UTC m=+5.039138596,LastTimestamp:2026-02-24 05:12:52.456834303 +0000 UTC m=+19.120831036,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 05:12:52.487400 master-0 kubenswrapper[4158]: E0224 05:12:52.487106 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b81bd93903\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b81bd93903 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:38.390593795 +0000 UTC m=+5.054590488,LastTimestamp:2026-02-24 05:12:52.477098345 +0000 UTC m=+19.141095068,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 05:12:52.636506 master-0 kubenswrapper[4158]: W0224 05:12:52.636300 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 24 05:12:52.636506 master-0 kubenswrapper[4158]: E0224 05:12:52.636377 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 24 05:12:52.977395 master-0 kubenswrapper[4158]: I0224 05:12:52.976763 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:53.251581 master-0 kubenswrapper[4158]: I0224 05:12:53.251509 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 24 05:12:53.252288 master-0 kubenswrapper[4158]: I0224 05:12:53.252228 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/1.log"
Feb 24 05:12:53.252925 master-0 kubenswrapper[4158]: I0224 05:12:53.252839 4158 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c" exitCode=1
Feb 24 05:12:53.252925 master-0 kubenswrapper[4158]: I0224 05:12:53.252879 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c"}
Feb 24 05:12:53.253159 master-0 kubenswrapper[4158]: I0224 05:12:53.252937 4158 scope.go:117] "RemoveContainer" containerID="4a6431ad2e348da673451c4ac01f0742fed27f8448f349ab43dae3e0ab73a9ce"
Feb 24 05:12:53.253159 master-0 kubenswrapper[4158]: I0224 05:12:53.253086 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:53.255831 master-0 kubenswrapper[4158]: I0224 05:12:53.255788 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:53.255831 master-0 kubenswrapper[4158]: I0224 05:12:53.255828 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:53.256025 master-0 kubenswrapper[4158]: I0224 05:12:53.255842 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:53.256220 master-0 kubenswrapper[4158]: I0224 05:12:53.256191 4158 scope.go:117] "RemoveContainer" containerID="3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c"
Feb 24 05:12:53.256414 master-0 kubenswrapper[4158]: E0224 05:12:53.256374 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 24 05:12:53.264602 master-0 kubenswrapper[4158]: E0224 05:12:53.264438 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b886c7daf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b886c7daf6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:40.184617718 +0000 UTC m=+6.848614421,LastTimestamp:2026-02-24 05:12:53.25634641 +0000 UTC m=+19.920343113,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 05:12:53.569695 master-0 kubenswrapper[4158]: I0224 05:12:53.569438 4158 csr.go:261] certificate signing request csr-8fbjx is approved, waiting to be issued
Feb 24 05:12:53.595584 master-0 kubenswrapper[4158]: E0224 05:12:53.595530 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 24 05:12:53.777889 master-0 kubenswrapper[4158]: W0224 05:12:53.777697 4158 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 24 05:12:53.777889 master-0 kubenswrapper[4158]: E0224 05:12:53.777774 4158 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 24 05:12:53.795645 master-0 kubenswrapper[4158]: I0224 05:12:53.795506 4158 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:12:53.795905 master-0 kubenswrapper[4158]: I0224 05:12:53.795712 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:53.796774 master-0 kubenswrapper[4158]: I0224 05:12:53.796741 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:53.796847 master-0 kubenswrapper[4158]: I0224 05:12:53.796799 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:53.796847 master-0 kubenswrapper[4158]: I0224 05:12:53.796823 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:53.800460 master-0 kubenswrapper[4158]: I0224 05:12:53.800417 4158 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:12:53.821519 master-0 kubenswrapper[4158]: I0224 05:12:53.821440 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:53.822527 master-0 kubenswrapper[4158]: I0224 05:12:53.822467 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:53.823088 master-0 kubenswrapper[4158]: I0224 05:12:53.822612 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:53.823088 master-0 kubenswrapper[4158]: I0224 05:12:53.822643 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:53.823088 master-0 kubenswrapper[4158]: I0224 05:12:53.822697 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 05:12:53.830529 master-0 kubenswrapper[4158]: E0224 05:12:53.830465 4158 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 24 05:12:53.969598 master-0 kubenswrapper[4158]: I0224 05:12:53.969561 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:54.091711 master-0 kubenswrapper[4158]: E0224 05:12:54.091525 4158 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 24 05:12:54.258724 master-0 kubenswrapper[4158]: I0224 05:12:54.258654 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 24 05:12:54.259617 master-0 kubenswrapper[4158]: I0224 05:12:54.259565 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:54.260910 master-0 kubenswrapper[4158]: I0224 05:12:54.260876 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:54.261094 master-0 kubenswrapper[4158]: I0224 05:12:54.260926 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:54.261094 master-0 kubenswrapper[4158]: I0224 05:12:54.260947 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:54.718267 master-0 kubenswrapper[4158]: I0224 05:12:54.718138 4158 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:12:54.726260 master-0 kubenswrapper[4158]: I0224 05:12:54.726172 4158 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:12:54.972050 master-0 kubenswrapper[4158]: I0224 05:12:54.971827 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:55.262815 master-0 kubenswrapper[4158]: I0224 05:12:55.262714 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:55.264434 master-0 kubenswrapper[4158]: I0224 05:12:55.264369 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:55.265578 master-0 kubenswrapper[4158]: I0224 05:12:55.265524 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:55.265578 master-0 kubenswrapper[4158]: I0224 05:12:55.265568 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:55.268746 master-0 kubenswrapper[4158]: I0224 05:12:55.268652 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:12:55.270429 master-0 kubenswrapper[4158]: I0224 05:12:55.270358 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:12:55.972173 master-0 kubenswrapper[4158]: I0224 05:12:55.972090 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:56.265479 master-0 kubenswrapper[4158]: I0224 05:12:56.265391 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:56.267070 master-0 kubenswrapper[4158]: I0224 05:12:56.267004 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:56.267165 master-0 kubenswrapper[4158]: I0224 05:12:56.267103 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:56.267165 master-0 kubenswrapper[4158]: I0224 05:12:56.267137 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:56.973010 master-0 kubenswrapper[4158]: I0224 05:12:56.972921 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:57.270003 master-0 kubenswrapper[4158]: I0224 05:12:57.269740 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:12:57.271008 master-0 kubenswrapper[4158]: I0224 05:12:57.270953 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:12:57.271600 master-0 kubenswrapper[4158]: I0224 05:12:57.271058 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:12:57.271600 master-0 kubenswrapper[4158]: I0224 05:12:57.271125 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:12:57.972542 master-0 kubenswrapper[4158]: I0224 05:12:57.972463 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:58.972686 master-0 kubenswrapper[4158]: I0224 05:12:58.972596 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:12:59.970956 master-0 kubenswrapper[4158]: I0224 05:12:59.970861 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:13:00.605667 master-0 kubenswrapper[4158]: E0224 05:13:00.605566 4158 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 24 05:13:00.831121 master-0 kubenswrapper[4158]: I0224 05:13:00.830982 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:13:00.833148 master-0 kubenswrapper[4158]: I0224 05:13:00.832830 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:13:00.833148 master-0 kubenswrapper[4158]: I0224 05:13:00.832930 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:13:00.833148 master-0 kubenswrapper[4158]: I0224 05:13:00.832952 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:13:00.833148 master-0 kubenswrapper[4158]: I0224 05:13:00.833036 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 05:13:00.841432 master-0 kubenswrapper[4158]: E0224 05:13:00.841348 4158 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Feb 24 05:13:00.969387 master-0 kubenswrapper[4158]: I0224 05:13:00.969119 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:13:01.972234 master-0 kubenswrapper[4158]: I0224 05:13:01.972131 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:13:02.972264 master-0 kubenswrapper[4158]: I0224 05:13:02.972151 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:13:03.971567 master-0 kubenswrapper[4158]: I0224 05:13:03.971468 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:13:04.092621 master-0 kubenswrapper[4158]: E0224 05:13:04.092415 4158 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 24 05:13:04.144813 master-0 kubenswrapper[4158]: I0224 05:13:04.143954 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:13:04.146303 master-0 kubenswrapper[4158]: I0224 05:13:04.146256 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:13:04.146536 master-0 kubenswrapper[4158]: I0224 05:13:04.146347 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:13:04.146536 master-0 kubenswrapper[4158]: I0224 05:13:04.146368 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:13:04.147034 master-0 kubenswrapper[4158]: I0224 05:13:04.146991 4158 scope.go:117] "RemoveContainer" containerID="3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c"
Feb 24 05:13:04.147337 master-0 kubenswrapper[4158]: E0224 05:13:04.147255 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="c997c8e9d3be51d454d8e61e376bef08"
Feb 24 05:13:04.155037 master-0 kubenswrapper[4158]: E0224 05:13:04.154834 4158 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189716b886c7daf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189716b886c7daf6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:c997c8e9d3be51d454d8e61e376bef08,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(c997c8e9d3be51d454d8e61e376bef08),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:12:40.184617718 +0000 UTC m=+6.848614421,LastTimestamp:2026-02-24 05:13:04.147206754 +0000 UTC m=+30.811203477,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 05:13:04.973664 master-0 kubenswrapper[4158]: I0224 05:13:04.973579 4158 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 24 05:13:05.019938 master-0 kubenswrapper[4158]: I0224 05:13:05.019866 4158 csr.go:257] certificate signing request csr-8fbjx is issued
Feb 24 05:13:05.841683 master-0 kubenswrapper[4158]: I0224 05:13:05.841580 4158 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 24 05:13:05.976111 master-0 kubenswrapper[4158]: I0224 05:13:05.976007 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:05.991127 master-0 kubenswrapper[4158]: I0224 05:13:05.991053 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.022561 master-0 kubenswrapper[4158]: I0224 05:13:06.022466 4158 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-25 05:04:32 +0000 UTC, rotation deadline is 2026-02-25 02:10:56.430212868 +0000 UTC
Feb 24 05:13:06.022561 master-0 kubenswrapper[4158]: I0224 05:13:06.022535 4158 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h57m50.407683544s for next certificate rotation
Feb 24 05:13:06.050413 master-0 kubenswrapper[4158]: I0224 05:13:06.050356 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.324063 master-0 kubenswrapper[4158]: I0224 05:13:06.323997 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.324063 master-0 kubenswrapper[4158]: E0224 05:13:06.324037 4158 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 24 05:13:06.344621 master-0 kubenswrapper[4158]: I0224 05:13:06.344563 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.366341 master-0 kubenswrapper[4158]: I0224 05:13:06.366249 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.422633 master-0 kubenswrapper[4158]: I0224 05:13:06.422543 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.695825 master-0 kubenswrapper[4158]: I0224 05:13:06.695610 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.695825 master-0 kubenswrapper[4158]: E0224 05:13:06.695701 4158 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 24 05:13:06.800417 master-0 kubenswrapper[4158]: I0224 05:13:06.800286 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.817351 master-0 kubenswrapper[4158]: I0224 05:13:06.817272 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:06.876574 master-0 kubenswrapper[4158]: I0224 05:13:06.876496 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:07.151531 master-0 kubenswrapper[4158]: I0224 05:13:07.151480 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:07.151531 master-0 kubenswrapper[4158]: E0224 05:13:07.151523 4158 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Feb 24 05:13:07.612963 master-0 kubenswrapper[4158]: E0224 05:13:07.612891 4158 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Feb 24 05:13:07.728114 master-0 kubenswrapper[4158]: I0224 05:13:07.728009 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:07.744963 master-0 kubenswrapper[4158]: I0224 05:13:07.744887 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:07.807765 master-0 kubenswrapper[4158]: I0224 05:13:07.807702 4158 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Feb 24 05:13:07.842404 master-0 kubenswrapper[4158]: I0224 05:13:07.842231 4158 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:13:07.844526 master-0 kubenswrapper[4158]: I0224 05:13:07.844468 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:13:07.844655 master-0 kubenswrapper[4158]: I0224 05:13:07.844539 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:13:07.844655 master-0 kubenswrapper[4158]: I0224 05:13:07.844559 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:13:07.844655 master-0 kubenswrapper[4158]: I0224 05:13:07.844628 4158 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 05:13:07.857599 master-0 kubenswrapper[4158]: I0224 05:13:07.857499 4158 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Feb 24 05:13:07.857599 master-0 kubenswrapper[4158]: E0224 05:13:07.857552 4158 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Feb 24 05:13:07.873490 master-0 kubenswrapper[4158]: E0224 05:13:07.873296 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:07.973696 master-0 kubenswrapper[4158]: E0224 05:13:07.973587 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:07.987004 master-0 kubenswrapper[4158]: I0224 05:13:07.986949 4158 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 24 05:13:08.000199 master-0 kubenswrapper[4158]: I0224 05:13:08.000094 4158 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 24 05:13:08.074093 master-0 kubenswrapper[4158]: E0224 05:13:08.073998 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.174342 master-0 kubenswrapper[4158]: E0224 05:13:08.174131 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.275072 master-0 kubenswrapper[4158]: E0224 05:13:08.274964 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.375746 master-0 kubenswrapper[4158]: E0224 05:13:08.375644 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.476080 master-0 kubenswrapper[4158]: E0224 05:13:08.475901 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.576870 master-0 kubenswrapper[4158]: E0224 05:13:08.576778 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.678047 master-0 kubenswrapper[4158]: E0224 05:13:08.677943 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.778686 master-0 kubenswrapper[4158]: E0224 05:13:08.778604 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.879243 master-0 kubenswrapper[4158]: E0224 05:13:08.879137 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:08.936199 master-0 kubenswrapper[4158]: I0224 05:13:08.936111 4158 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 05:13:08.979514 master-0 kubenswrapper[4158]: E0224 05:13:08.979430 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.079942 master-0 kubenswrapper[4158]: E0224 05:13:09.079790 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.180514 master-0 kubenswrapper[4158]: E0224 05:13:09.180429 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.281268 master-0 kubenswrapper[4158]: E0224 05:13:09.281166 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.381759 master-0 kubenswrapper[4158]: E0224 05:13:09.381611 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.482931 master-0 kubenswrapper[4158]: E0224 05:13:09.482778 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.583371 master-0 kubenswrapper[4158]: E0224 05:13:09.583190 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.684266 master-0 kubenswrapper[4158]: E0224 05:13:09.684045 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.785126 master-0 kubenswrapper[4158]: E0224 05:13:09.785016 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.885913 master-0 kubenswrapper[4158]: E0224 05:13:09.885761 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:09.986277 master-0 kubenswrapper[4158]: E0224 05:13:09.986203 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:10.087196 master-0 kubenswrapper[4158]: E0224 05:13:10.087135 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:10.187351 master-0 kubenswrapper[4158]: E0224 05:13:10.187251 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:10.288378 master-0 kubenswrapper[4158]: E0224 05:13:10.288192 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:10.388446 master-0 kubenswrapper[4158]: E0224 05:13:10.388391 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:10.489004 master-0 kubenswrapper[4158]: E0224 05:13:10.488960 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:10.589237 master-0 kubenswrapper[4158]: E0224 05:13:10.589105 4158 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:13:10.619157 master-0 kubenswrapper[4158]: I0224 05:13:10.618861 4158 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 05:13:10.758340 master-0 kubenswrapper[4158]: I0224 05:13:10.758207 4158 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 24 05:13:10.967957 master-0 kubenswrapper[4158]: I0224 05:13:10.967693 4158 apiserver.go:52] "Watching apiserver"
Feb 24 05:13:10.973047 master-0 kubenswrapper[4158]: I0224 05:13:10.972960 4158 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 24 05:13:10.973420 master-0 kubenswrapper[4158]: I0224 05:13:10.973351 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-r6zx7","openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2","openshift-network-operator/network-operator-7d7db75979-4fk6k"]
Feb 24 05:13:10.974008 master-0 kubenswrapper[4158]: I0224 05:13:10.973965 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-r6zx7"
Feb 24 05:13:10.974857 master-0 kubenswrapper[4158]: I0224 05:13:10.974176 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k"
Feb 24 05:13:10.974857 master-0 kubenswrapper[4158]: I0224 05:13:10.974179 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.978834 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.979010 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.979255 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.979393 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.979575 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.979633 4158 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.979661 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.979599 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Feb 24 05:13:10.979489 master-0 kubenswrapper[4158]: I0224 05:13:10.979835 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Feb 24 05:13:10.984526 master-0 kubenswrapper[4158]: I0224 05:13:10.980013 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 05:13:11.068264 master-0 kubenswrapper[4158]: I0224 05:13:11.068152 4158 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 24 05:13:11.151354 master-0 kubenswrapper[4158]: I0224 05:13:11.151243 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-sno-bootstrap-files\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7"
Feb 24 05:13:11.151771 master-0 kubenswrapper[4158]: I0224 05:13:11.151376 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:13:11.151771 master-0 kubenswrapper[4158]: I0224 05:13:11.151419 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:13:11.151771 master-0 kubenswrapper[4158]: I0224 05:13:11.151452 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.151771 master-0 kubenswrapper[4158]: I0224 05:13:11.151493 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-ca-bundle\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.151771 master-0 kubenswrapper[4158]: I0224 05:13:11.151536 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhqc\" (UniqueName: \"kubernetes.io/projected/8a278410-3079-49d9-8c59-4cedf3f50213-kube-api-access-lfhqc\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.151771 master-0 kubenswrapper[4158]: I0224 05:13:11.151573 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcj62\" (UniqueName: \"kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.151771 master-0 kubenswrapper[4158]: I0224 05:13:11.151604 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: 
\"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.152422 master-0 kubenswrapper[4158]: I0224 05:13:11.151810 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.152422 master-0 kubenswrapper[4158]: I0224 05:13:11.151899 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-resolv-conf\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.152422 master-0 kubenswrapper[4158]: I0224 05:13:11.151973 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-var-run-resolv-conf\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.152422 master-0 kubenswrapper[4158]: I0224 05:13:11.152067 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.152422 master-0 kubenswrapper[4158]: I0224 
05:13:11.152143 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.252709 master-0 kubenswrapper[4158]: I0224 05:13:11.252613 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.252709 master-0 kubenswrapper[4158]: I0224 05:13:11.252671 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.252709 master-0 kubenswrapper[4158]: I0224 05:13:11.252703 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-sno-bootstrap-files\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.252709 master-0 kubenswrapper[4158]: I0224 05:13:11.252726 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: 
\"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.253222 master-0 kubenswrapper[4158]: I0224 05:13:11.252746 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.253222 master-0 kubenswrapper[4158]: I0224 05:13:11.253075 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.253222 master-0 kubenswrapper[4158]: I0224 05:13:11.253130 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.253465 master-0 kubenswrapper[4158]: I0224 05:13:11.253180 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcj62\" (UniqueName: \"kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.253465 master-0 kubenswrapper[4158]: E0224 05:13:11.253198 4158 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:11.253465 master-0 kubenswrapper[4158]: I0224 05:13:11.253382 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.253465 master-0 kubenswrapper[4158]: I0224 05:13:11.253455 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-ca-bundle\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.253687 master-0 kubenswrapper[4158]: I0224 05:13:11.253510 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfhqc\" (UniqueName: \"kubernetes.io/projected/8a278410-3079-49d9-8c59-4cedf3f50213-kube-api-access-lfhqc\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.253687 master-0 kubenswrapper[4158]: I0224 05:13:11.253559 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.253687 master-0 kubenswrapper[4158]: I0224 05:13:11.253615 4158 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-var-run-resolv-conf\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.253687 master-0 kubenswrapper[4158]: E0224 05:13:11.253678 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:13:11.753636815 +0000 UTC m=+38.417633548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:11.253965 master-0 kubenswrapper[4158]: I0224 05:13:11.253697 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-var-run-resolv-conf\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.253965 master-0 kubenswrapper[4158]: I0224 05:13:11.253723 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-resolv-conf\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.253965 master-0 kubenswrapper[4158]: I0224 05:13:11.253781 4158 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-resolv-conf\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.253965 master-0 kubenswrapper[4158]: I0224 05:13:11.253925 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.254843 master-0 kubenswrapper[4158]: I0224 05:13:11.254353 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-sno-bootstrap-files\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.254843 master-0 kubenswrapper[4158]: I0224 05:13:11.254567 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-ca-bundle\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.254843 master-0 kubenswrapper[4158]: I0224 05:13:11.254606 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " 
pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.255438 master-0 kubenswrapper[4158]: I0224 05:13:11.255057 4158 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 05:13:11.256816 master-0 kubenswrapper[4158]: I0224 05:13:11.256232 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.264913 master-0 kubenswrapper[4158]: I0224 05:13:11.264763 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.274545 master-0 kubenswrapper[4158]: I0224 05:13:11.274480 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfhqc\" (UniqueName: \"kubernetes.io/projected/8a278410-3079-49d9-8c59-4cedf3f50213-kube-api-access-lfhqc\") pod \"assisted-installer-controller-r6zx7\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.280927 master-0 kubenswrapper[4158]: I0224 05:13:11.280882 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " 
pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.288821 master-0 kubenswrapper[4158]: I0224 05:13:11.288736 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcj62\" (UniqueName: \"kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.311121 master-0 kubenswrapper[4158]: I0224 05:13:11.310900 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:11.324721 master-0 kubenswrapper[4158]: I0224 05:13:11.324679 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:13:11.349028 master-0 kubenswrapper[4158]: W0224 05:13:11.348943 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf77227c8_c52d_4a71_ae1b_792055f6f23d.slice/crio-334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397 WatchSource:0}: Error finding container 334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397: Status 404 returned error can't find the container with id 334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397 Feb 24 05:13:11.757548 master-0 kubenswrapper[4158]: I0224 05:13:11.757445 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:11.757781 master-0 kubenswrapper[4158]: E0224 05:13:11.757719 
4158 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:11.757909 master-0 kubenswrapper[4158]: E0224 05:13:11.757875 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:13:12.757839186 +0000 UTC m=+39.421835919 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:12.311678 master-0 kubenswrapper[4158]: I0224 05:13:12.311605 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" event={"ID":"f77227c8-c52d-4a71-ae1b-792055f6f23d","Type":"ContainerStarted","Data":"334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397"} Feb 24 05:13:12.312633 master-0 kubenswrapper[4158]: I0224 05:13:12.312595 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-r6zx7" event={"ID":"8a278410-3079-49d9-8c59-4cedf3f50213","Type":"ContainerStarted","Data":"f2b2e64cf1008b56ca7ac547f9f48c6ff5064b81e3d54d12e96dc4d8b69f818b"} Feb 24 05:13:12.766195 master-0 kubenswrapper[4158]: I0224 05:13:12.766109 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:12.766415 
master-0 kubenswrapper[4158]: E0224 05:13:12.766293 4158 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:12.766415 master-0 kubenswrapper[4158]: E0224 05:13:12.766407 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:13:14.766379622 +0000 UTC m=+41.430376325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:14.483982 master-0 kubenswrapper[4158]: I0224 05:13:14.483915 4158 csr.go:261] certificate signing request csr-zv6ls is approved, waiting to be issued Feb 24 05:13:14.493129 master-0 kubenswrapper[4158]: I0224 05:13:14.493073 4158 csr.go:257] certificate signing request csr-zv6ls is issued Feb 24 05:13:14.781212 master-0 kubenswrapper[4158]: I0224 05:13:14.781091 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:14.781397 master-0 kubenswrapper[4158]: E0224 05:13:14.781245 4158 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:14.781397 master-0 kubenswrapper[4158]: E0224 05:13:14.781338 4158 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:13:18.781288064 +0000 UTC m=+45.445284757 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:15.495073 master-0 kubenswrapper[4158]: I0224 05:13:15.494994 4158 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-25 05:04:32 +0000 UTC, rotation deadline is 2026-02-25 01:26:53.41046325 +0000 UTC Feb 24 05:13:15.495073 master-0 kubenswrapper[4158]: I0224 05:13:15.495048 4158 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h13m37.915419294s for next certificate rotation Feb 24 05:13:15.606257 master-0 kubenswrapper[4158]: I0224 05:13:15.606184 4158 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 05:13:16.164881 master-0 kubenswrapper[4158]: I0224 05:13:16.164782 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 24 05:13:16.165629 master-0 kubenswrapper[4158]: I0224 05:13:16.165568 4158 scope.go:117] "RemoveContainer" containerID="3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c" Feb 24 05:13:16.325081 master-0 kubenswrapper[4158]: I0224 05:13:16.324912 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" event={"ID":"f77227c8-c52d-4a71-ae1b-792055f6f23d","Type":"ContainerStarted","Data":"22b7d6a6838a4874825b0fb486995e1ecae2b2ab9edf5d7d1caac95d9b544b8e"} Feb 24 05:13:16.330677 master-0 kubenswrapper[4158]: I0224 
05:13:16.330634 4158 generic.go:334] "Generic (PLEG): container finished" podID="8a278410-3079-49d9-8c59-4cedf3f50213" containerID="e982480a91e40cd1e1954911193f2f93b612563b4c71eb1b41d290507d50a572" exitCode=0 Feb 24 05:13:16.330677 master-0 kubenswrapper[4158]: I0224 05:13:16.330673 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-r6zx7" event={"ID":"8a278410-3079-49d9-8c59-4cedf3f50213","Type":"ContainerDied","Data":"e982480a91e40cd1e1954911193f2f93b612563b4c71eb1b41d290507d50a572"} Feb 24 05:13:16.388090 master-0 kubenswrapper[4158]: I0224 05:13:16.387998 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" podStartSLOduration=3.996796451 podStartE2EDuration="8.387967685s" podCreationTimestamp="2026-02-24 05:13:08 +0000 UTC" firstStartedPulling="2026-02-24 05:13:11.352184054 +0000 UTC m=+38.016180757" lastFinishedPulling="2026-02-24 05:13:15.743355258 +0000 UTC m=+42.407351991" observedRunningTime="2026-02-24 05:13:16.358645811 +0000 UTC m=+43.022642514" watchObservedRunningTime="2026-02-24 05:13:16.387967685 +0000 UTC m=+43.051964398" Feb 24 05:13:16.496618 master-0 kubenswrapper[4158]: I0224 05:13:16.496527 4158 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-25 05:04:32 +0000 UTC, rotation deadline is 2026-02-25 02:08:38.297604042 +0000 UTC Feb 24 05:13:16.496618 master-0 kubenswrapper[4158]: I0224 05:13:16.496611 4158 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h55m21.800999101s for next certificate rotation Feb 24 05:13:17.336485 master-0 kubenswrapper[4158]: I0224 05:13:17.336410 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 24 05:13:17.337005 master-0 kubenswrapper[4158]: I0224 05:13:17.336926 4158 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"c041af7c63d223942ce08c38d39df788b42cf76c6700a1fcbc754b1fc0059d6c"} Feb 24 05:13:17.356377 master-0 kubenswrapper[4158]: I0224 05:13:17.356302 4158 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-r6zx7" Feb 24 05:13:17.362882 master-0 kubenswrapper[4158]: I0224 05:13:17.362813 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.362793117 podStartE2EDuration="1.362793117s" podCreationTimestamp="2026-02-24 05:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:13:17.362252703 +0000 UTC m=+44.026249426" watchObservedRunningTime="2026-02-24 05:13:17.362793117 +0000 UTC m=+44.026789840" Feb 24 05:13:17.504464 master-0 kubenswrapper[4158]: I0224 05:13:17.504384 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-var-run-resolv-conf\") pod \"8a278410-3079-49d9-8c59-4cedf3f50213\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " Feb 24 05:13:17.504464 master-0 kubenswrapper[4158]: I0224 05:13:17.504460 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-sno-bootstrap-files\") pod \"8a278410-3079-49d9-8c59-4cedf3f50213\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") " Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504496 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-ca-bundle\") pod \"8a278410-3079-49d9-8c59-4cedf3f50213\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") "
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504540 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfhqc\" (UniqueName: \"kubernetes.io/projected/8a278410-3079-49d9-8c59-4cedf3f50213-kube-api-access-lfhqc\") pod \"8a278410-3079-49d9-8c59-4cedf3f50213\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") "
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504574 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-resolv-conf\") pod \"8a278410-3079-49d9-8c59-4cedf3f50213\" (UID: \"8a278410-3079-49d9-8c59-4cedf3f50213\") "
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504564 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "8a278410-3079-49d9-8c59-4cedf3f50213" (UID: "8a278410-3079-49d9-8c59-4cedf3f50213"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504565 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "8a278410-3079-49d9-8c59-4cedf3f50213" (UID: "8a278410-3079-49d9-8c59-4cedf3f50213"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504720 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "8a278410-3079-49d9-8c59-4cedf3f50213" (UID: "8a278410-3079-49d9-8c59-4cedf3f50213"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504617 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "8a278410-3079-49d9-8c59-4cedf3f50213" (UID: "8a278410-3079-49d9-8c59-4cedf3f50213"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504941 4158 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.504969 4158 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.505002 4158 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Feb 24 05:13:17.505458 master-0 kubenswrapper[4158]: I0224 05:13:17.505024 4158 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/8a278410-3079-49d9-8c59-4cedf3f50213-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:13:17.514510 master-0 kubenswrapper[4158]: I0224 05:13:17.514423 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a278410-3079-49d9-8c59-4cedf3f50213-kube-api-access-lfhqc" (OuterVolumeSpecName: "kube-api-access-lfhqc") pod "8a278410-3079-49d9-8c59-4cedf3f50213" (UID: "8a278410-3079-49d9-8c59-4cedf3f50213"). InnerVolumeSpecName "kube-api-access-lfhqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:13:17.605862 master-0 kubenswrapper[4158]: I0224 05:13:17.605654 4158 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfhqc\" (UniqueName: \"kubernetes.io/projected/8a278410-3079-49d9-8c59-4cedf3f50213-kube-api-access-lfhqc\") on node \"master-0\" DevicePath \"\""
Feb 24 05:13:18.343428 master-0 kubenswrapper[4158]: I0224 05:13:18.343292 4158 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-r6zx7"
Feb 24 05:13:18.343825 master-0 kubenswrapper[4158]: I0224 05:13:18.343450 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-r6zx7" event={"ID":"8a278410-3079-49d9-8c59-4cedf3f50213","Type":"ContainerDied","Data":"f2b2e64cf1008b56ca7ac547f9f48c6ff5064b81e3d54d12e96dc4d8b69f818b"}
Feb 24 05:13:18.343825 master-0 kubenswrapper[4158]: I0224 05:13:18.343503 4158 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b2e64cf1008b56ca7ac547f9f48c6ff5064b81e3d54d12e96dc4d8b69f818b"
Feb 24 05:13:18.808037 master-0 kubenswrapper[4158]: I0224 05:13:18.807752 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-cg7zd"]
Feb 24 05:13:18.808037 master-0 kubenswrapper[4158]: E0224 05:13:18.807899 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:13:18.808037 master-0 kubenswrapper[4158]: I0224 05:13:18.807929 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:13:18.808037 master-0 kubenswrapper[4158]: I0224 05:13:18.808009 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:13:18.809277 master-0 kubenswrapper[4158]: I0224 05:13:18.808371 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-cg7zd"
Feb 24 05:13:18.814685 master-0 kubenswrapper[4158]: I0224 05:13:18.814629 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:13:18.814809 master-0 kubenswrapper[4158]: E0224 05:13:18.814785 4158 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 05:13:18.814902 master-0 kubenswrapper[4158]: E0224 05:13:18.814860 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:13:26.814840491 +0000 UTC m=+53.478837224 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found
Feb 24 05:13:18.916181 master-0 kubenswrapper[4158]: I0224 05:13:18.916052 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkhxk\" (UniqueName: \"kubernetes.io/projected/ba74ac93-7ad1-46e5-97c6-75c410d6a39e-kube-api-access-tkhxk\") pod \"mtu-prober-cg7zd\" (UID: \"ba74ac93-7ad1-46e5-97c6-75c410d6a39e\") " pod="openshift-network-operator/mtu-prober-cg7zd"
Feb 24 05:13:19.017147 master-0 kubenswrapper[4158]: I0224 05:13:19.017060 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkhxk\" (UniqueName: \"kubernetes.io/projected/ba74ac93-7ad1-46e5-97c6-75c410d6a39e-kube-api-access-tkhxk\") pod \"mtu-prober-cg7zd\" (UID: \"ba74ac93-7ad1-46e5-97c6-75c410d6a39e\") " pod="openshift-network-operator/mtu-prober-cg7zd"
Feb 24 05:13:19.049261 master-0 kubenswrapper[4158]: I0224 05:13:19.049174 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkhxk\" (UniqueName: \"kubernetes.io/projected/ba74ac93-7ad1-46e5-97c6-75c410d6a39e-kube-api-access-tkhxk\") pod \"mtu-prober-cg7zd\" (UID: \"ba74ac93-7ad1-46e5-97c6-75c410d6a39e\") " pod="openshift-network-operator/mtu-prober-cg7zd"
Feb 24 05:13:19.131305 master-0 kubenswrapper[4158]: I0224 05:13:19.131092 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-cg7zd"
Feb 24 05:13:19.153185 master-0 kubenswrapper[4158]: W0224 05:13:19.153092 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba74ac93_7ad1_46e5_97c6_75c410d6a39e.slice/crio-87cd00dcbfae0a09b15eeee8498d1b2df616ce62ff83ab180ef147871919e915 WatchSource:0}: Error finding container 87cd00dcbfae0a09b15eeee8498d1b2df616ce62ff83ab180ef147871919e915: Status 404 returned error can't find the container with id 87cd00dcbfae0a09b15eeee8498d1b2df616ce62ff83ab180ef147871919e915
Feb 24 05:13:19.345079 master-0 kubenswrapper[4158]: I0224 05:13:19.344973 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-cg7zd" event={"ID":"ba74ac93-7ad1-46e5-97c6-75c410d6a39e","Type":"ContainerStarted","Data":"87cd00dcbfae0a09b15eeee8498d1b2df616ce62ff83ab180ef147871919e915"}
Feb 24 05:13:20.350838 master-0 kubenswrapper[4158]: I0224 05:13:20.350738 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" containerID="c068b345adaab906615d4122b8703a382ed80a18092bab0453b7f7d8b6ad8324" exitCode=0
Feb 24 05:13:20.352767 master-0 kubenswrapper[4158]: I0224 05:13:20.350828 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-cg7zd" event={"ID":"ba74ac93-7ad1-46e5-97c6-75c410d6a39e","Type":"ContainerDied","Data":"c068b345adaab906615d4122b8703a382ed80a18092bab0453b7f7d8b6ad8324"}
Feb 24 05:13:21.380554 master-0 kubenswrapper[4158]: I0224 05:13:21.380485 4158 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-cg7zd"
Feb 24 05:13:21.536679 master-0 kubenswrapper[4158]: I0224 05:13:21.536526 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkhxk\" (UniqueName: \"kubernetes.io/projected/ba74ac93-7ad1-46e5-97c6-75c410d6a39e-kube-api-access-tkhxk\") pod \"ba74ac93-7ad1-46e5-97c6-75c410d6a39e\" (UID: \"ba74ac93-7ad1-46e5-97c6-75c410d6a39e\") "
Feb 24 05:13:21.542871 master-0 kubenswrapper[4158]: I0224 05:13:21.542794 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba74ac93-7ad1-46e5-97c6-75c410d6a39e-kube-api-access-tkhxk" (OuterVolumeSpecName: "kube-api-access-tkhxk") pod "ba74ac93-7ad1-46e5-97c6-75c410d6a39e" (UID: "ba74ac93-7ad1-46e5-97c6-75c410d6a39e"). InnerVolumeSpecName "kube-api-access-tkhxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:13:21.637711 master-0 kubenswrapper[4158]: I0224 05:13:21.637572 4158 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkhxk\" (UniqueName: \"kubernetes.io/projected/ba74ac93-7ad1-46e5-97c6-75c410d6a39e-kube-api-access-tkhxk\") on node \"master-0\" DevicePath \"\""
Feb 24 05:13:22.360852 master-0 kubenswrapper[4158]: I0224 05:13:22.360730 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-cg7zd" event={"ID":"ba74ac93-7ad1-46e5-97c6-75c410d6a39e","Type":"ContainerDied","Data":"87cd00dcbfae0a09b15eeee8498d1b2df616ce62ff83ab180ef147871919e915"}
Feb 24 05:13:22.360852 master-0 kubenswrapper[4158]: I0224 05:13:22.360812 4158 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87cd00dcbfae0a09b15eeee8498d1b2df616ce62ff83ab180ef147871919e915"
Feb 24 05:13:22.360852 master-0 kubenswrapper[4158]: I0224 05:13:22.360863 4158 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-cg7zd"
Feb 24 05:13:23.828838 master-0 kubenswrapper[4158]: I0224 05:13:23.828749 4158 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-cg7zd"]
Feb 24 05:13:23.836821 master-0 kubenswrapper[4158]: I0224 05:13:23.836770 4158 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-cg7zd"]
Feb 24 05:13:24.150460 master-0 kubenswrapper[4158]: I0224 05:13:24.150238 4158 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" path="/var/lib/kubelet/pods/ba74ac93-7ad1-46e5-97c6-75c410d6a39e/volumes"
Feb 24 05:13:26.877012 master-0 kubenswrapper[4158]: I0224 05:13:26.876857 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:13:26.878390 master-0 kubenswrapper[4158]: E0224 05:13:26.877174 4158 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 05:13:26.878390 master-0 kubenswrapper[4158]: E0224 05:13:26.877399 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:13:42.877347838 +0000 UTC m=+69.541344561 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found
Feb 24 05:13:28.726092 master-0 kubenswrapper[4158]: I0224 05:13:28.726018 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8qp5g"]
Feb 24 05:13:28.726092 master-0 kubenswrapper[4158]: E0224 05:13:28.726113 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" containerName="prober"
Feb 24 05:13:28.726970 master-0 kubenswrapper[4158]: I0224 05:13:28.726129 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" containerName="prober"
Feb 24 05:13:28.726970 master-0 kubenswrapper[4158]: I0224 05:13:28.726159 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" containerName="prober"
Feb 24 05:13:28.726970 master-0 kubenswrapper[4158]: I0224 05:13:28.726421 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.729866 master-0 kubenswrapper[4158]: I0224 05:13:28.729822 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 05:13:28.730027 master-0 kubenswrapper[4158]: I0224 05:13:28.729812 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 24 05:13:28.730027 master-0 kubenswrapper[4158]: I0224 05:13:28.729982 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 24 05:13:28.730199 master-0 kubenswrapper[4158]: I0224 05:13:28.729807 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 24 05:13:28.894651 master-0 kubenswrapper[4158]: I0224 05:13:28.894550 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.894651 master-0 kubenswrapper[4158]: I0224 05:13:28.894629 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.895399 master-0 kubenswrapper[4158]: I0224 05:13:28.894813 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.895399 master-0 kubenswrapper[4158]: I0224 05:13:28.894923 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.895399 master-0 kubenswrapper[4158]: I0224 05:13:28.894971 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.895399 master-0 kubenswrapper[4158]: I0224 05:13:28.895014 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.895399 master-0 kubenswrapper[4158]: I0224 05:13:28.895062 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.895399 master-0 kubenswrapper[4158]: I0224 05:13:28.895110 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.895468 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.895532 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.895587 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.895634 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.895781 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4rw\" (UniqueName: \"kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.895987 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.896058 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.896102 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.897679 master-0 kubenswrapper[4158]: I0224 05:13:28.896135 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.920883 master-0 kubenswrapper[4158]: I0224 05:13:28.920794 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-jknmn"]
Feb 24 05:13:28.921859 master-0 kubenswrapper[4158]: I0224 05:13:28.921766 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.926894 master-0 kubenswrapper[4158]: I0224 05:13:28.925361 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 24 05:13:28.926894 master-0 kubenswrapper[4158]: I0224 05:13:28.925748 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 24 05:13:28.997022 master-0 kubenswrapper[4158]: I0224 05:13:28.996933 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997022 master-0 kubenswrapper[4158]: I0224 05:13:28.996973 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997022 master-0 kubenswrapper[4158]: I0224 05:13:28.996997 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997022 master-0 kubenswrapper[4158]: I0224 05:13:28.997019 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997022 master-0 kubenswrapper[4158]: I0224 05:13:28.997038 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997650 master-0 kubenswrapper[4158]: I0224 05:13:28.997058 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997650 master-0 kubenswrapper[4158]: I0224 05:13:28.997144 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.997650 master-0 kubenswrapper[4158]: I0224 05:13:28.997217 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997650 master-0 kubenswrapper[4158]: I0224 05:13:28.997430 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997650 master-0 kubenswrapper[4158]: I0224 05:13:28.997513 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997650 master-0 kubenswrapper[4158]: I0224 05:13:28.997599 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.997650 master-0 kubenswrapper[4158]: I0224 05:13:28.997604 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.997668 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.997745 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.997809 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.997855 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.997911 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.997929 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4rw\" (UniqueName: \"kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.997982 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.998027 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.998063 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl828\" (UniqueName: \"kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.998108 master-0 kubenswrapper[4158]: I0224 05:13:28.998105 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998123 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998149 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998203 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998247 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998340 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998363 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998394 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998461 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998481 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998499 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998538 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998548 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998610 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998641 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998667 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998688 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:28.998725 master-0 kubenswrapper[4158]: I0224 05:13:28.998741 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.999857 master-0 kubenswrapper[4158]: I0224 05:13:28.999428 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:28.999857 master-0 kubenswrapper[4158]: I0224 05:13:28.999756 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:29.021681 master-0 kubenswrapper[4158]: I0224 05:13:29.021582 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx4rw\" (UniqueName: \"kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:13:29.048555 master-0 kubenswrapper[4158]: I0224 05:13:29.048469 4158 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-8qp5g" Feb 24 05:13:29.068932 master-0 kubenswrapper[4158]: W0224 05:13:29.068856 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc00ee01c_143b_4e44_823c_c6bfdedb8ed6.slice/crio-84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537 WatchSource:0}: Error finding container 84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537: Status 404 returned error can't find the container with id 84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537 Feb 24 05:13:29.099413 master-0 kubenswrapper[4158]: I0224 05:13:29.099285 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:13:29.099746 master-0 kubenswrapper[4158]: I0224 05:13:29.099685 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:13:29.099746 master-0 kubenswrapper[4158]: I0224 05:13:29.099745 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:13:29.099946 master-0 kubenswrapper[4158]: I0224 05:13:29.099786 4158 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hl828\" (UniqueName: \"kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:13:29.099946 master-0 kubenswrapper[4158]: I0224 05:13:29.099827 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:13:29.099946 master-0 kubenswrapper[4158]: I0224 05:13:29.099862 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:13:29.099946 master-0 kubenswrapper[4158]: I0224 05:13:29.099850 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:13:29.100259 master-0 kubenswrapper[4158]: I0224 05:13:29.100227 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" 
Feb 24 05:13:29.100415 master-0 kubenswrapper[4158]: I0224 05:13:29.100363 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.100503 master-0 kubenswrapper[4158]: I0224 05:13:29.100455 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.100593 master-0 kubenswrapper[4158]: I0224 05:13:29.100507 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.100726 master-0 kubenswrapper[4158]: I0224 05:13:29.100669 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.101113 master-0 kubenswrapper[4158]: I0224 05:13:29.101039 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.101577 master-0 kubenswrapper[4158]: I0224 05:13:29.101531 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.101687 master-0 kubenswrapper[4158]: I0224 05:13:29.101543 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.120575 master-0 kubenswrapper[4158]: I0224 05:13:29.120460 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl828\" (UniqueName: \"kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.245099 master-0 kubenswrapper[4158]: I0224 05:13:29.244713 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:13:29.265071 master-0 kubenswrapper[4158]: W0224 05:13:29.264966 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod767424fb_babf_4b73_b5e2_0bee65fcf207.slice/crio-924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b WatchSource:0}: Error finding container 924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b: Status 404 returned error can't find the container with id 924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b
Feb 24 05:13:29.382997 master-0 kubenswrapper[4158]: I0224 05:13:29.382876 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jknmn" event={"ID":"767424fb-babf-4b73-b5e2-0bee65fcf207","Type":"ContainerStarted","Data":"924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b"}
Feb 24 05:13:29.384497 master-0 kubenswrapper[4158]: I0224 05:13:29.384430 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qp5g" event={"ID":"c00ee01c-143b-4e44-823c-c6bfdedb8ed6","Type":"ContainerStarted","Data":"84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537"}
Feb 24 05:13:29.709025 master-0 kubenswrapper[4158]: I0224 05:13:29.708825 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2vsjh"]
Feb 24 05:13:29.709409 master-0 kubenswrapper[4158]: I0224 05:13:29.709353 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:29.709511 master-0 kubenswrapper[4158]: E0224 05:13:29.709474 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:29.809593 master-0 kubenswrapper[4158]: I0224 05:13:29.809508 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:29.809593 master-0 kubenswrapper[4158]: I0224 05:13:29.809610 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ktz5\" (UniqueName: \"kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:29.911368 master-0 kubenswrapper[4158]: I0224 05:13:29.911229 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ktz5\" (UniqueName: \"kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:29.911368 master-0 kubenswrapper[4158]: I0224 05:13:29.911304 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:29.911842 master-0 kubenswrapper[4158]: E0224 05:13:29.911489 4158 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:29.911842 master-0 kubenswrapper[4158]: E0224 05:13:29.911546 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:13:30.411531788 +0000 UTC m=+57.075528481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:29.944785 master-0 kubenswrapper[4158]: I0224 05:13:29.944736 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ktz5\" (UniqueName: \"kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:30.416164 master-0 kubenswrapper[4158]: I0224 05:13:30.416079 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:30.416769 master-0 kubenswrapper[4158]: E0224 05:13:30.416282 4158 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:30.416769 master-0 kubenswrapper[4158]: E0224 05:13:30.416384 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:13:31.416356976 +0000 UTC m=+58.080353669 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:31.144507 master-0 kubenswrapper[4158]: I0224 05:13:31.144442 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:31.145407 master-0 kubenswrapper[4158]: E0224 05:13:31.144561 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:31.424701 master-0 kubenswrapper[4158]: I0224 05:13:31.424539 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:31.424964 master-0 kubenswrapper[4158]: E0224 05:13:31.424719 4158 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:31.424964 master-0 kubenswrapper[4158]: E0224 05:13:31.424794 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:13:33.424776279 +0000 UTC m=+60.088772972 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:32.395117 master-0 kubenswrapper[4158]: I0224 05:13:32.395028 4158 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="2b0f6afa851de70b995ddec42c066893d0946d31fc515e6b27f74dd91d84efa9" exitCode=0
Feb 24 05:13:32.395117 master-0 kubenswrapper[4158]: I0224 05:13:32.395127 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jknmn" event={"ID":"767424fb-babf-4b73-b5e2-0bee65fcf207","Type":"ContainerDied","Data":"2b0f6afa851de70b995ddec42c066893d0946d31fc515e6b27f74dd91d84efa9"}
Feb 24 05:13:33.144520 master-0 kubenswrapper[4158]: I0224 05:13:33.144278 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:33.144520 master-0 kubenswrapper[4158]: E0224 05:13:33.144505 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:33.441352 master-0 kubenswrapper[4158]: I0224 05:13:33.441141 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:33.442633 master-0 kubenswrapper[4158]: E0224 05:13:33.441367 4158 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:33.442633 master-0 kubenswrapper[4158]: E0224 05:13:33.441458 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:13:37.441425809 +0000 UTC m=+64.105422522 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:35.144449 master-0 kubenswrapper[4158]: I0224 05:13:35.143936 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:35.146804 master-0 kubenswrapper[4158]: E0224 05:13:35.144616 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:37.143765 master-0 kubenswrapper[4158]: I0224 05:13:37.143690 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:37.144920 master-0 kubenswrapper[4158]: E0224 05:13:37.143872 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:37.484413 master-0 kubenswrapper[4158]: I0224 05:13:37.484360 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:37.484682 master-0 kubenswrapper[4158]: E0224 05:13:37.484514 4158 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:37.484682 master-0 kubenswrapper[4158]: E0224 05:13:37.484565 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:13:45.484550908 +0000 UTC m=+72.148547601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:13:39.144200 master-0 kubenswrapper[4158]: I0224 05:13:39.144124 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:39.144969 master-0 kubenswrapper[4158]: E0224 05:13:39.144286 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:40.420554 master-0 kubenswrapper[4158]: I0224 05:13:40.420456 4158 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="1273096ef4d43d16e5ea21290ec73d25330bc531d5f7358ac2c2166cc791f502" exitCode=0
Feb 24 05:13:40.420554 master-0 kubenswrapper[4158]: I0224 05:13:40.420544 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jknmn" event={"ID":"767424fb-babf-4b73-b5e2-0bee65fcf207","Type":"ContainerDied","Data":"1273096ef4d43d16e5ea21290ec73d25330bc531d5f7358ac2c2166cc791f502"}
Feb 24 05:13:41.105650 master-0 kubenswrapper[4158]: I0224 05:13:41.105606 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"]
Feb 24 05:13:41.105929 master-0 kubenswrapper[4158]: I0224 05:13:41.105907 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.108228 master-0 kubenswrapper[4158]: I0224 05:13:41.107698 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 24 05:13:41.108228 master-0 kubenswrapper[4158]: I0224 05:13:41.108190 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 24 05:13:41.110227 master-0 kubenswrapper[4158]: I0224 05:13:41.109626 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 24 05:13:41.110227 master-0 kubenswrapper[4158]: I0224 05:13:41.109748 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 24 05:13:41.110227 master-0 kubenswrapper[4158]: I0224 05:13:41.109758 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 24 05:13:41.111842 master-0 kubenswrapper[4158]: I0224 05:13:41.111815 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.111946 master-0 kubenswrapper[4158]: I0224 05:13:41.111851 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.111946 master-0 kubenswrapper[4158]: I0224 05:13:41.111867 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs794\" (UniqueName: \"kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.111946 master-0 kubenswrapper[4158]: I0224 05:13:41.111886 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.144507 master-0 kubenswrapper[4158]: I0224 05:13:41.144445 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:41.144874 master-0 kubenswrapper[4158]: E0224 05:13:41.144833 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:41.197062 master-0 kubenswrapper[4158]: W0224 05:13:41.196603 4158 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 24 05:13:41.197062 master-0 kubenswrapper[4158]: I0224 05:13:41.196934 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Feb 24 05:13:41.213619 master-0 kubenswrapper[4158]: I0224 05:13:41.212944 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.213619 master-0 kubenswrapper[4158]: I0224 05:13:41.213034 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.213619 master-0 kubenswrapper[4158]: I0224 05:13:41.213070 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs794\" (UniqueName: \"kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.213619 master-0 kubenswrapper[4158]: I0224 05:13:41.213099 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.213619 master-0 kubenswrapper[4158]: I0224 05:13:41.213584 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.213967 master-0 kubenswrapper[4158]: I0224 05:13:41.213935 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.224062 master-0 kubenswrapper[4158]: I0224 05:13:41.224038 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.233933 master-0 kubenswrapper[4158]: I0224 05:13:41.233907 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs794\" (UniqueName: \"kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:13:41.357384 master-0 kubenswrapper[4158]: I0224 05:13:41.356258 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jtdzc"]
Feb 24 05:13:41.357384 master-0 kubenswrapper[4158]: I0224 05:13:41.356884 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc"
Feb 24 05:13:41.359597 master-0 kubenswrapper[4158]: I0224 05:13:41.359379 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 05:13:41.359597 master-0 kubenswrapper[4158]: I0224 05:13:41.359516 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 05:13:41.373281 master-0 kubenswrapper[4158]: I0224 05:13:41.373191 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=0.373169685 podStartE2EDuration="373.169685ms" podCreationTimestamp="2026-02-24 05:13:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:13:41.370766669 +0000 UTC m=+68.034763362" watchObservedRunningTime="2026-02-24 05:13:41.373169685 +0000 UTC m=+68.037166388"
Feb 24 05:13:41.427921 master-0 kubenswrapper[4158]: I0224 05:13:41.427877 4158 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:13:41.515459 master-0 kubenswrapper[4158]: I0224 05:13:41.515383 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-ovn-kubernetes\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515459 master-0 kubenswrapper[4158]: I0224 05:13:41.515444 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-systemd\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515459 master-0 kubenswrapper[4158]: I0224 05:13:41.515463 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515547 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-var-lib-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515603 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-config\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515633 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-slash\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515657 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-log-socket\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515681 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-ovn\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515704 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-netd\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515724 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-netns\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515740 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515760 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-env-overrides\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515780 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-systemd-units\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 master-0 kubenswrapper[4158]: I0224 05:13:41.515797 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba37be4c-fd93-485e-9599-de562820d909-ovn-node-metrics-cert\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.515808 
master-0 kubenswrapper[4158]: I0224 05:13:41.515814 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-script-lib\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.516123 master-0 kubenswrapper[4158]: I0224 05:13:41.515832 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-kubelet\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.516123 master-0 kubenswrapper[4158]: I0224 05:13:41.515848 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-node-log\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.516123 master-0 kubenswrapper[4158]: I0224 05:13:41.515863 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-bin\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.516123 master-0 kubenswrapper[4158]: I0224 05:13:41.515880 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvshk\" (UniqueName: \"kubernetes.io/projected/ba37be4c-fd93-485e-9599-de562820d909-kube-api-access-lvshk\") pod \"ovnkube-node-jtdzc\" (UID: 
\"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.516123 master-0 kubenswrapper[4158]: I0224 05:13:41.515930 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-etc-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616575 master-0 kubenswrapper[4158]: I0224 05:13:41.616366 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-etc-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616575 master-0 kubenswrapper[4158]: I0224 05:13:41.616553 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-etc-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616902 master-0 kubenswrapper[4158]: I0224 05:13:41.616606 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-systemd\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616902 master-0 kubenswrapper[4158]: I0224 05:13:41.616657 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: 
\"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616902 master-0 kubenswrapper[4158]: I0224 05:13:41.616730 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616902 master-0 kubenswrapper[4158]: I0224 05:13:41.616765 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-ovn-kubernetes\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616902 master-0 kubenswrapper[4158]: I0224 05:13:41.616787 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-var-lib-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616902 master-0 kubenswrapper[4158]: I0224 05:13:41.616800 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-systemd\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.616902 master-0 kubenswrapper[4158]: I0224 05:13:41.616804 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-config\") pod \"ovnkube-node-jtdzc\" (UID: 
\"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617188 master-0 kubenswrapper[4158]: I0224 05:13:41.616938 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-var-lib-openvswitch\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617188 master-0 kubenswrapper[4158]: I0224 05:13:41.616933 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-ovn-kubernetes\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617271 master-0 kubenswrapper[4158]: I0224 05:13:41.617014 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-slash\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617498 master-0 kubenswrapper[4158]: I0224 05:13:41.617455 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-log-socket\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617581 master-0 kubenswrapper[4158]: I0224 05:13:41.617120 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-slash\") pod \"ovnkube-node-jtdzc\" (UID: 
\"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617581 master-0 kubenswrapper[4158]: I0224 05:13:41.617541 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-ovn\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617670 master-0 kubenswrapper[4158]: I0224 05:13:41.617508 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-ovn\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617868 master-0 kubenswrapper[4158]: I0224 05:13:41.617819 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-log-socket\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.617961 master-0 kubenswrapper[4158]: I0224 05:13:41.617931 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-netd\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618062 master-0 kubenswrapper[4158]: I0224 05:13:41.618025 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-netns\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618102 master-0 kubenswrapper[4158]: I0224 05:13:41.618060 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-config\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618102 master-0 kubenswrapper[4158]: I0224 05:13:41.618069 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618175 master-0 kubenswrapper[4158]: I0224 05:13:41.618128 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-env-overrides\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618175 master-0 kubenswrapper[4158]: I0224 05:13:41.618169 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-netns\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618246 master-0 kubenswrapper[4158]: I0224 05:13:41.618213 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618246 master-0 kubenswrapper[4158]: I0224 05:13:41.618214 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-systemd-units\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618655 master-0 kubenswrapper[4158]: I0224 05:13:41.618279 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-systemd-units\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618730 master-0 kubenswrapper[4158]: I0224 05:13:41.618681 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba37be4c-fd93-485e-9599-de562820d909-ovn-node-metrics-cert\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618730 master-0 kubenswrapper[4158]: I0224 05:13:41.618133 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-netd\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618898 master-0 kubenswrapper[4158]: I0224 05:13:41.618736 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-kubelet\") pod \"ovnkube-node-jtdzc\" (UID: 
\"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.618972 master-0 kubenswrapper[4158]: I0224 05:13:41.618910 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-node-log\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.619049 master-0 kubenswrapper[4158]: I0224 05:13:41.618953 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-bin\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.619155 master-0 kubenswrapper[4158]: I0224 05:13:41.619078 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-script-lib\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.619270 master-0 kubenswrapper[4158]: I0224 05:13:41.619242 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvshk\" (UniqueName: \"kubernetes.io/projected/ba37be4c-fd93-485e-9599-de562820d909-kube-api-access-lvshk\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.619368 master-0 kubenswrapper[4158]: I0224 05:13:41.619306 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-node-log\") pod \"ovnkube-node-jtdzc\" (UID: 
\"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.619368 master-0 kubenswrapper[4158]: I0224 05:13:41.618691 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-env-overrides\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.619855 master-0 kubenswrapper[4158]: I0224 05:13:41.619815 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-kubelet\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.619912 master-0 kubenswrapper[4158]: I0224 05:13:41.619880 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-bin\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.623270 master-0 kubenswrapper[4158]: I0224 05:13:41.623204 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba37be4c-fd93-485e-9599-de562820d909-ovn-node-metrics-cert\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.623503 master-0 kubenswrapper[4158]: I0224 05:13:41.623284 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-script-lib\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.653715 master-0 kubenswrapper[4158]: I0224 05:13:41.653614 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvshk\" (UniqueName: \"kubernetes.io/projected/ba37be4c-fd93-485e-9599-de562820d909-kube-api-access-lvshk\") pod \"ovnkube-node-jtdzc\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:41.670943 master-0 kubenswrapper[4158]: I0224 05:13:41.670897 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" Feb 24 05:13:42.943164 master-0 kubenswrapper[4158]: I0224 05:13:42.943051 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:13:42.943955 master-0 kubenswrapper[4158]: E0224 05:13:42.943382 4158 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:42.943955 master-0 kubenswrapper[4158]: E0224 05:13:42.943483 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:14.943449878 +0000 UTC m=+101.607446601 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found Feb 24 05:13:43.144183 master-0 kubenswrapper[4158]: I0224 05:13:43.144136 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:13:43.144433 master-0 kubenswrapper[4158]: E0224 05:13:43.144376 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:13:43.957782 master-0 kubenswrapper[4158]: W0224 05:13:43.957629 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba37be4c_fd93_485e_9599_de562820d909.slice/crio-d91bf7b8d34e1f15ac85412f592332fa821c616af9acf0e1fcb802613907ca17 WatchSource:0}: Error finding container d91bf7b8d34e1f15ac85412f592332fa821c616af9acf0e1fcb802613907ca17: Status 404 returned error can't find the container with id d91bf7b8d34e1f15ac85412f592332fa821c616af9acf0e1fcb802613907ca17 Feb 24 05:13:43.958522 master-0 kubenswrapper[4158]: W0224 05:13:43.958262 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88b915ff_fd94_4998_aa09_70f95c0f1b8a.slice/crio-b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720 WatchSource:0}: Error finding container b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720: Status 404 returned error can't find the 
container with id b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720 Feb 24 05:13:44.300824 master-0 kubenswrapper[4158]: I0224 05:13:44.300708 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-vp2jg"] Feb 24 05:13:44.301400 master-0 kubenswrapper[4158]: I0224 05:13:44.301370 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:44.302251 master-0 kubenswrapper[4158]: E0224 05:13:44.301493 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:13:44.356100 master-0 kubenswrapper[4158]: I0224 05:13:44.356020 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:44.434339 master-0 kubenswrapper[4158]: I0224 05:13:44.434232 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" event={"ID":"88b915ff-fd94-4998-aa09-70f95c0f1b8a","Type":"ContainerStarted","Data":"1a25a16ac39c85492e260b57d493f31df45ebdeba2fd14c3415358f87a9bb6ab"} Feb 24 05:13:44.434339 master-0 kubenswrapper[4158]: I0224 05:13:44.434343 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" 
event={"ID":"88b915ff-fd94-4998-aa09-70f95c0f1b8a","Type":"ContainerStarted","Data":"b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720"} Feb 24 05:13:44.436780 master-0 kubenswrapper[4158]: I0224 05:13:44.436702 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"d91bf7b8d34e1f15ac85412f592332fa821c616af9acf0e1fcb802613907ca17"} Feb 24 05:13:44.439655 master-0 kubenswrapper[4158]: I0224 05:13:44.439563 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8qp5g" event={"ID":"c00ee01c-143b-4e44-823c-c6bfdedb8ed6","Type":"ContainerStarted","Data":"490f13304c509feaf2afd704a94c71ccf8ca652272c148d9c65e650f48bb04e8"} Feb 24 05:13:44.457613 master-0 kubenswrapper[4158]: I0224 05:13:44.457468 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:44.488567 master-0 kubenswrapper[4158]: E0224 05:13:44.488494 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 05:13:44.488567 master-0 kubenswrapper[4158]: E0224 05:13:44.488555 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 05:13:44.488567 master-0 kubenswrapper[4158]: E0224 05:13:44.488579 4158 projected.go:194] Error preparing data for projected volume kube-api-access-ckfnc for pod openshift-network-diagnostics/network-check-target-vp2jg: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:44.488872 master-0 kubenswrapper[4158]: E0224 05:13:44.488690 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc podName:1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa nodeName:}" failed. No retries permitted until 2026-02-24 05:13:44.988642458 +0000 UTC m=+71.652639181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ckfnc" (UniqueName: "kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc") pod "network-check-target-vp2jg" (UID: "1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:45.067927 master-0 kubenswrapper[4158]: I0224 05:13:45.067796 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:45.068497 master-0 kubenswrapper[4158]: E0224 05:13:45.067958 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 05:13:45.068497 master-0 kubenswrapper[4158]: E0224 05:13:45.067977 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 05:13:45.068497 master-0 kubenswrapper[4158]: E0224 05:13:45.067987 4158 projected.go:194] Error preparing 
data for projected volume kube-api-access-ckfnc for pod openshift-network-diagnostics/network-check-target-vp2jg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:45.068497 master-0 kubenswrapper[4158]: E0224 05:13:45.068029 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc podName:1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa nodeName:}" failed. No retries permitted until 2026-02-24 05:13:46.068015145 +0000 UTC m=+72.732011838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ckfnc" (UniqueName: "kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc") pod "network-check-target-vp2jg" (UID: "1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:45.144852 master-0 kubenswrapper[4158]: I0224 05:13:45.144374 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:13:45.145043 master-0 kubenswrapper[4158]: E0224 05:13:45.145007 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:13:45.447187 master-0 kubenswrapper[4158]: I0224 05:13:45.447128 4158 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="08804aa446128a3eba2bae15a34a0cc35ebced6e192e0098ad42bbf36874d56b" exitCode=0 Feb 24 05:13:45.447936 master-0 kubenswrapper[4158]: I0224 05:13:45.447896 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jknmn" event={"ID":"767424fb-babf-4b73-b5e2-0bee65fcf207","Type":"ContainerDied","Data":"08804aa446128a3eba2bae15a34a0cc35ebced6e192e0098ad42bbf36874d56b"} Feb 24 05:13:45.572277 master-0 kubenswrapper[4158]: I0224 05:13:45.572211 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:13:45.572508 master-0 kubenswrapper[4158]: E0224 05:13:45.572380 4158 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 05:13:45.572508 master-0 kubenswrapper[4158]: E0224 05:13:45.572472 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:01.572447672 +0000 UTC m=+88.236444365 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 05:13:45.786943 master-0 kubenswrapper[4158]: I0224 05:13:45.786860 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8qp5g" podStartSLOduration=2.793607317 podStartE2EDuration="17.78683818s" podCreationTimestamp="2026-02-24 05:13:28 +0000 UTC" firstStartedPulling="2026-02-24 05:13:29.07154609 +0000 UTC m=+55.735542823" lastFinishedPulling="2026-02-24 05:13:44.064776983 +0000 UTC m=+70.728773686" observedRunningTime="2026-02-24 05:13:44.46720637 +0000 UTC m=+71.131203113" watchObservedRunningTime="2026-02-24 05:13:45.78683818 +0000 UTC m=+72.450834873" Feb 24 05:13:46.076593 master-0 kubenswrapper[4158]: I0224 05:13:46.076461 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:46.077057 master-0 kubenswrapper[4158]: E0224 05:13:46.076710 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 05:13:46.077057 master-0 kubenswrapper[4158]: E0224 05:13:46.076748 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 05:13:46.077057 master-0 kubenswrapper[4158]: E0224 05:13:46.076761 4158 projected.go:194] Error preparing data for projected volume 
kube-api-access-ckfnc for pod openshift-network-diagnostics/network-check-target-vp2jg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:46.077057 master-0 kubenswrapper[4158]: E0224 05:13:46.076833 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc podName:1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa nodeName:}" failed. No retries permitted until 2026-02-24 05:13:48.076814046 +0000 UTC m=+74.740810969 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ckfnc" (UniqueName: "kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc") pod "network-check-target-vp2jg" (UID: "1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:46.144691 master-0 kubenswrapper[4158]: I0224 05:13:46.144623 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:46.144896 master-0 kubenswrapper[4158]: E0224 05:13:46.144801 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:13:47.143715 master-0 kubenswrapper[4158]: I0224 05:13:47.143640 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:13:47.144298 master-0 kubenswrapper[4158]: E0224 05:13:47.143886 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:13:47.168115 master-0 kubenswrapper[4158]: I0224 05:13:47.168046 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-rlg4x"] Feb 24 05:13:47.168708 master-0 kubenswrapper[4158]: I0224 05:13:47.168672 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.171424 master-0 kubenswrapper[4158]: I0224 05:13:47.171384 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 05:13:47.171563 master-0 kubenswrapper[4158]: I0224 05:13:47.171416 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 05:13:47.171563 master-0 kubenswrapper[4158]: I0224 05:13:47.171497 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 05:13:47.171694 master-0 kubenswrapper[4158]: I0224 05:13:47.171585 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 05:13:47.171694 master-0 kubenswrapper[4158]: I0224 05:13:47.171682 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 05:13:47.298933 master-0 
kubenswrapper[4158]: I0224 05:13:47.298856 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.298933 master-0 kubenswrapper[4158]: I0224 05:13:47.298906 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8zb\" (UniqueName: \"kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.299171 master-0 kubenswrapper[4158]: I0224 05:13:47.298972 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.299171 master-0 kubenswrapper[4158]: I0224 05:13:47.299042 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.400026 master-0 kubenswrapper[4158]: I0224 05:13:47.399897 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8zb\" (UniqueName: 
\"kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.400026 master-0 kubenswrapper[4158]: I0224 05:13:47.399982 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.400026 master-0 kubenswrapper[4158]: I0224 05:13:47.400002 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.400026 master-0 kubenswrapper[4158]: I0224 05:13:47.400021 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.400731 master-0 kubenswrapper[4158]: I0224 05:13:47.400701 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.401343 master-0 kubenswrapper[4158]: E0224 05:13:47.401267 4158 
secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Feb 24 05:13:47.401480 master-0 kubenswrapper[4158]: E0224 05:13:47.401440 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert podName:c106275b-72b6-4877-95c3-830f93e35375 nodeName:}" failed. No retries permitted until 2026-02-24 05:13:47.90140497 +0000 UTC m=+74.565401683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert") pod "network-node-identity-rlg4x" (UID: "c106275b-72b6-4877-95c3-830f93e35375") : secret "network-node-identity-cert" not found Feb 24 05:13:47.402103 master-0 kubenswrapper[4158]: I0224 05:13:47.402040 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.420950 master-0 kubenswrapper[4158]: I0224 05:13:47.420891 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p8zb\" (UniqueName: \"kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.904215 master-0 kubenswrapper[4158]: I0224 05:13:47.904097 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " 
pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:47.908041 master-0 kubenswrapper[4158]: I0224 05:13:47.907883 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:48.087413 master-0 kubenswrapper[4158]: I0224 05:13:48.087287 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:13:48.098931 master-0 kubenswrapper[4158]: W0224 05:13:48.098875 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc106275b_72b6_4877_95c3_830f93e35375.slice/crio-a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25 WatchSource:0}: Error finding container a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25: Status 404 returned error can't find the container with id a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25 Feb 24 05:13:48.105592 master-0 kubenswrapper[4158]: I0224 05:13:48.105552 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:48.105816 master-0 kubenswrapper[4158]: E0224 05:13:48.105749 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 05:13:48.105816 master-0 kubenswrapper[4158]: E0224 05:13:48.105803 4158 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 05:13:48.105816 master-0 kubenswrapper[4158]: E0224 05:13:48.105816 4158 projected.go:194] Error preparing data for projected volume kube-api-access-ckfnc for pod openshift-network-diagnostics/network-check-target-vp2jg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:48.106041 master-0 kubenswrapper[4158]: E0224 05:13:48.105889 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc podName:1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa nodeName:}" failed. No retries permitted until 2026-02-24 05:13:52.105873789 +0000 UTC m=+78.769870482 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ckfnc" (UniqueName: "kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc") pod "network-check-target-vp2jg" (UID: "1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:48.144302 master-0 kubenswrapper[4158]: I0224 05:13:48.144229 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:48.145018 master-0 kubenswrapper[4158]: E0224 05:13:48.144417 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:13:48.457743 master-0 kubenswrapper[4158]: I0224 05:13:48.457627 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rlg4x" event={"ID":"c106275b-72b6-4877-95c3-830f93e35375","Type":"ContainerStarted","Data":"a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25"} Feb 24 05:13:48.463581 master-0 kubenswrapper[4158]: I0224 05:13:48.463516 4158 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="5ade2b4cc50015238a7faa7e8d4af8c535b8fa2c1005c60f4da3c1f127ccbe16" exitCode=0 Feb 24 05:13:48.463748 master-0 kubenswrapper[4158]: I0224 05:13:48.463602 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jknmn" event={"ID":"767424fb-babf-4b73-b5e2-0bee65fcf207","Type":"ContainerDied","Data":"5ade2b4cc50015238a7faa7e8d4af8c535b8fa2c1005c60f4da3c1f127ccbe16"} Feb 24 05:13:49.144433 master-0 kubenswrapper[4158]: I0224 05:13:49.144373 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:13:49.145149 master-0 kubenswrapper[4158]: E0224 05:13:49.144540 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:13:50.145515 master-0 kubenswrapper[4158]: I0224 05:13:50.145429 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:50.172647 master-0 kubenswrapper[4158]: E0224 05:13:50.145576 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:13:51.144757 master-0 kubenswrapper[4158]: I0224 05:13:51.143983 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:13:51.144757 master-0 kubenswrapper[4158]: E0224 05:13:51.144221 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:13:52.146837 master-0 kubenswrapper[4158]: I0224 05:13:52.144440 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:52.146837 master-0 kubenswrapper[4158]: E0224 05:13:52.144638 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:13:52.187233 master-0 kubenswrapper[4158]: I0224 05:13:52.187153 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:52.187592 master-0 kubenswrapper[4158]: E0224 05:13:52.187364 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 05:13:52.187679 master-0 kubenswrapper[4158]: E0224 05:13:52.187667 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 05:13:52.187758 master-0 kubenswrapper[4158]: E0224 05:13:52.187747 4158 projected.go:194] Error preparing data for projected volume kube-api-access-ckfnc for pod openshift-network-diagnostics/network-check-target-vp2jg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:52.187948 master-0 kubenswrapper[4158]: E0224 05:13:52.187933 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc podName:1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa nodeName:}" failed. No retries permitted until 2026-02-24 05:14:00.187914917 +0000 UTC m=+86.851911610 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ckfnc" (UniqueName: "kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc") pod "network-check-target-vp2jg" (UID: "1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 05:13:53.144227 master-0 kubenswrapper[4158]: I0224 05:13:53.143806 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:13:53.144227 master-0 kubenswrapper[4158]: E0224 05:13:53.143919 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:13:54.144471 master-0 kubenswrapper[4158]: I0224 05:13:54.144423 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:13:54.145376 master-0 kubenswrapper[4158]: E0224 05:13:54.145087 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:13:55.144514 master-0 kubenswrapper[4158]: I0224 05:13:55.144430 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:55.145039 master-0 kubenswrapper[4158]: E0224 05:13:55.144622 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:56.143918 master-0 kubenswrapper[4158]: I0224 05:13:56.143844 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:13:56.144083 master-0 kubenswrapper[4158]: E0224 05:13:56.144024 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:13:57.143829 master-0 kubenswrapper[4158]: I0224 05:13:57.143675 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:57.144418 master-0 kubenswrapper[4158]: E0224 05:13:57.144009 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:13:57.158075 master-0 kubenswrapper[4158]: I0224 05:13:57.158014 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Feb 24 05:13:58.144275 master-0 kubenswrapper[4158]: I0224 05:13:58.144188 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:13:58.144943 master-0 kubenswrapper[4158]: E0224 05:13:58.144454 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:13:59.145174 master-0 kubenswrapper[4158]: I0224 05:13:59.144118 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:13:59.145174 master-0 kubenswrapper[4158]: E0224 05:13:59.144417 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:00.144070 master-0 kubenswrapper[4158]: I0224 05:14:00.143962 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:00.144597 master-0 kubenswrapper[4158]: E0224 05:14:00.144535 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:00.164630 master-0 kubenswrapper[4158]: I0224 05:14:00.164556 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 24 05:14:00.265574 master-0 kubenswrapper[4158]: I0224 05:14:00.265462 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:00.265905 master-0 kubenswrapper[4158]: E0224 05:14:00.265739 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 05:14:00.265905 master-0 kubenswrapper[4158]: E0224 05:14:00.265833 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 05:14:00.265905 master-0 kubenswrapper[4158]: E0224 05:14:00.265853 4158 projected.go:194] Error preparing data for projected volume kube-api-access-ckfnc for pod openshift-network-diagnostics/network-check-target-vp2jg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 05:14:00.266106 master-0 kubenswrapper[4158]: E0224 05:14:00.265941 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc podName:1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa nodeName:}" failed. No retries permitted until 2026-02-24 05:14:16.265918047 +0000 UTC m=+102.929914750 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ckfnc" (UniqueName: "kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc") pod "network-check-target-vp2jg" (UID: "1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 05:14:01.144435 master-0 kubenswrapper[4158]: I0224 05:14:01.144363 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:01.144702 master-0 kubenswrapper[4158]: E0224 05:14:01.144519 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:01.574902 master-0 kubenswrapper[4158]: I0224 05:14:01.574822 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:01.575549 master-0 kubenswrapper[4158]: E0224 05:14:01.575411 4158 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:14:01.575678 master-0 kubenswrapper[4158]: E0224 05:14:01.575639 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.575488325 +0000 UTC m=+120.239485058 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 05:14:02.144825 master-0 kubenswrapper[4158]: I0224 05:14:02.144609 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:02.145090 master-0 kubenswrapper[4158]: E0224 05:14:02.144816 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:02.512167 master-0 kubenswrapper[4158]: I0224 05:14:02.512071 4158 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="10e04cc7b2fe6f5614f2167cd49733daceb69f740134e7a457b65b54dad51b16" exitCode=0
Feb 24 05:14:02.512539 master-0 kubenswrapper[4158]: I0224 05:14:02.512211 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jknmn" event={"ID":"767424fb-babf-4b73-b5e2-0bee65fcf207","Type":"ContainerDied","Data":"10e04cc7b2fe6f5614f2167cd49733daceb69f740134e7a457b65b54dad51b16"}
Feb 24 05:14:02.515007 master-0 kubenswrapper[4158]: I0224 05:14:02.514924 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" event={"ID":"88b915ff-fd94-4998-aa09-70f95c0f1b8a","Type":"ContainerStarted","Data":"319aa71d8e4b9690e64904978260695fcae1163baf1014ab285b451aeabac3a9"}
Feb 24 05:14:02.518708 master-0 kubenswrapper[4158]: I0224 05:14:02.518646 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="590fe8d45e6d25b28dfc8d5e64f83cce4e0f5c535002110e93c3c9f684b27645" exitCode=0
Feb 24 05:14:02.518868 master-0 kubenswrapper[4158]: I0224 05:14:02.518753 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"590fe8d45e6d25b28dfc8d5e64f83cce4e0f5c535002110e93c3c9f684b27645"}
Feb 24 05:14:02.521382 master-0 kubenswrapper[4158]: I0224 05:14:02.521246 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rlg4x" event={"ID":"c106275b-72b6-4877-95c3-830f93e35375","Type":"ContainerStarted","Data":"3c48cf95cb20519b43165b534538afb3afad0ec1beb464f9f497eefdb2dc3c0f"}
Feb 24 05:14:02.521382 master-0 kubenswrapper[4158]: I0224 05:14:02.521344 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rlg4x" event={"ID":"c106275b-72b6-4877-95c3-830f93e35375","Type":"ContainerStarted","Data":"3c6414107ba4e270b37d4ad16b7d423fe9bba347c1c7107a12ec4d69a07a7201"}
Feb 24 05:14:02.556954 master-0 kubenswrapper[4158]: I0224 05:14:02.556835 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=5.556794546 podStartE2EDuration="5.556794546s" podCreationTimestamp="2026-02-24 05:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:02.532016399 +0000 UTC m=+89.196013102" watchObservedRunningTime="2026-02-24 05:14:02.556794546 +0000 UTC m=+89.220791279"
Feb 24 05:14:02.589793 master-0 kubenswrapper[4158]: I0224 05:14:02.589648 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=2.589308033 podStartE2EDuration="2.589308033s" podCreationTimestamp="2026-02-24 05:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:02.557814954 +0000 UTC m=+89.221811667" watchObservedRunningTime="2026-02-24 05:14:02.589308033 +0000 UTC m=+89.253304776"
Feb 24 05:14:02.647330 master-0 kubenswrapper[4158]: I0224 05:14:02.638176 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" podStartSLOduration=4.216090217 podStartE2EDuration="21.638136459s" podCreationTimestamp="2026-02-24 05:13:41 +0000 UTC" firstStartedPulling="2026-02-24 05:13:44.223642285 +0000 UTC m=+70.887638978" lastFinishedPulling="2026-02-24 05:14:01.645688487 +0000 UTC m=+88.309685220" observedRunningTime="2026-02-24 05:14:02.605369646 +0000 UTC m=+89.269366379" watchObservedRunningTime="2026-02-24 05:14:02.638136459 +0000 UTC m=+89.302133192"
Feb 24 05:14:02.698252 master-0 kubenswrapper[4158]: I0224 05:14:02.697748 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-rlg4x" podStartSLOduration=2.081850065 podStartE2EDuration="15.697723875s" podCreationTimestamp="2026-02-24 05:13:47 +0000 UTC" firstStartedPulling="2026-02-24 05:13:48.101650864 +0000 UTC m=+74.765647557" lastFinishedPulling="2026-02-24 05:14:01.717524634 +0000 UTC m=+88.381521367" observedRunningTime="2026-02-24 05:14:02.659402952 +0000 UTC m=+89.323399645" watchObservedRunningTime="2026-02-24 05:14:02.697723875 +0000 UTC m=+89.361720568"
Feb 24 05:14:03.144506 master-0 kubenswrapper[4158]: I0224 05:14:03.144441 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:03.144640 master-0 kubenswrapper[4158]: E0224 05:14:03.144584 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:03.532828 master-0 kubenswrapper[4158]: I0224 05:14:03.532740 4158 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="db9f1ce1d0787cc02e6669cdb33b3c44fb0d9c881cd88a981199272e23c784a9" exitCode=0
Feb 24 05:14:03.532828 master-0 kubenswrapper[4158]: I0224 05:14:03.532834 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jknmn" event={"ID":"767424fb-babf-4b73-b5e2-0bee65fcf207","Type":"ContainerDied","Data":"db9f1ce1d0787cc02e6669cdb33b3c44fb0d9c881cd88a981199272e23c784a9"}
Feb 24 05:14:03.539165 master-0 kubenswrapper[4158]: I0224 05:14:03.538446 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"95eacfe9fd75708247f1e22539a72a7a822f10f4feb0787530eb0888581d5f71"}
Feb 24 05:14:03.539628 master-0 kubenswrapper[4158]: I0224 05:14:03.539579 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"534fc9ea67b7965feba9acc2c3ff53cc31de9597f872ccc42a467fca80b78940"}
Feb 24 05:14:03.539628 master-0 kubenswrapper[4158]: I0224 05:14:03.539622 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"001b7389ff7284eddcb83468a108004a69a6764bb951301d77e6fd0ef2663f76"}
Feb 24 05:14:03.539716 master-0 kubenswrapper[4158]: I0224 05:14:03.539643 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"5af588ee5b7c246c5b0dcdbd846ebc8cb75231eafdc55bf5ff72f4f6b08e7bde"}
Feb 24 05:14:03.539716 master-0 kubenswrapper[4158]: I0224 05:14:03.539663 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"a4ee51818f73d3d21a93ac7287e5f4f702e637382e2f6334fc951ed591866bc6"}
Feb 24 05:14:04.144347 master-0 kubenswrapper[4158]: I0224 05:14:04.144229 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:04.145763 master-0 kubenswrapper[4158]: E0224 05:14:04.145673 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:04.553148 master-0 kubenswrapper[4158]: I0224 05:14:04.552983 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-jknmn" event={"ID":"767424fb-babf-4b73-b5e2-0bee65fcf207","Type":"ContainerStarted","Data":"2110efa2c637e7d79b4e38031d61bac5ac97b568425115e2da35ac37b6942fcb"}
Feb 24 05:14:04.561243 master-0 kubenswrapper[4158]: I0224 05:14:04.561147 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"e20b37593cc5f12d7142d2db129fb62b3a308a8745bb7b141ded23e85d1c9b0d"}
Feb 24 05:14:04.646921 master-0 kubenswrapper[4158]: I0224 05:14:04.646765 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-jknmn" podStartSLOduration=4.389701446 podStartE2EDuration="36.64673925s" podCreationTimestamp="2026-02-24 05:13:28 +0000 UTC" firstStartedPulling="2026-02-24 05:13:29.268589587 +0000 UTC m=+55.932586310" lastFinishedPulling="2026-02-24 05:14:01.525627381 +0000 UTC m=+88.189624114" observedRunningTime="2026-02-24 05:14:04.645534087 +0000 UTC m=+91.309530850" watchObservedRunningTime="2026-02-24 05:14:04.64673925 +0000 UTC m=+91.310735983"
Feb 24 05:14:05.144818 master-0 kubenswrapper[4158]: I0224 05:14:05.144632 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:05.146450 master-0 kubenswrapper[4158]: E0224 05:14:05.144990 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:06.144509 master-0 kubenswrapper[4158]: I0224 05:14:06.144280 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:06.145170 master-0 kubenswrapper[4158]: E0224 05:14:06.144523 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:06.576180 master-0 kubenswrapper[4158]: I0224 05:14:06.576096 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"3ac9024cb0a66dd8734739cbb5bcd6e679ea43f4883a07172cf2a9991710293b"}
Feb 24 05:14:07.144117 master-0 kubenswrapper[4158]: I0224 05:14:07.143944 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:07.144436 master-0 kubenswrapper[4158]: E0224 05:14:07.144204 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:07.262416 master-0 kubenswrapper[4158]: I0224 05:14:07.262299 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 24 05:14:07.279713 master-0 kubenswrapper[4158]: I0224 05:14:07.279612 4158 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jtdzc"]
Feb 24 05:14:08.144566 master-0 kubenswrapper[4158]: I0224 05:14:08.144479 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:08.144960 master-0 kubenswrapper[4158]: E0224 05:14:08.144660 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:09.144366 master-0 kubenswrapper[4158]: I0224 05:14:09.143751 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:09.145629 master-0 kubenswrapper[4158]: E0224 05:14:09.144417 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:09.598916 master-0 kubenswrapper[4158]: I0224 05:14:09.598812 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerStarted","Data":"c3581e200e0fedaabb78ae31391c819c18c3dc337e1b5bf4ba652ace6f481142"}
Feb 24 05:14:09.599369 master-0 kubenswrapper[4158]: I0224 05:14:09.599158 4158 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovn-controller" containerID="cri-o://a4ee51818f73d3d21a93ac7287e5f4f702e637382e2f6334fc951ed591866bc6" gracePeriod=30
Feb 24 05:14:09.599369 master-0 kubenswrapper[4158]: I0224 05:14:09.599222 4158 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="nbdb" containerID="cri-o://e20b37593cc5f12d7142d2db129fb62b3a308a8745bb7b141ded23e85d1c9b0d" gracePeriod=30
Feb 24 05:14:09.599369 master-0 kubenswrapper[4158]: I0224 05:14:09.599322 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc"
Feb 24 05:14:09.599369 master-0 kubenswrapper[4158]: I0224 05:14:09.599350 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc"
Feb 24 05:14:09.599369 master-0 kubenswrapper[4158]: I0224 05:14:09.599296 4158 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://534fc9ea67b7965feba9acc2c3ff53cc31de9597f872ccc42a467fca80b78940" gracePeriod=30
Feb 24 05:14:09.599770 master-0 kubenswrapper[4158]: I0224 05:14:09.599335 4158 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kube-rbac-proxy-node" containerID="cri-o://001b7389ff7284eddcb83468a108004a69a6764bb951301d77e6fd0ef2663f76" gracePeriod=30
Feb 24 05:14:09.599770 master-0 kubenswrapper[4158]: I0224 05:14:09.599515 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc"
Feb 24 05:14:09.599770 master-0 kubenswrapper[4158]: I0224 05:14:09.599569 4158 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="northd" containerID="cri-o://95eacfe9fd75708247f1e22539a72a7a822f10f4feb0787530eb0888581d5f71" gracePeriod=30
Feb 24 05:14:09.599770 master-0 kubenswrapper[4158]: I0224 05:14:09.599552 4158 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovn-acl-logging" containerID="cri-o://5af588ee5b7c246c5b0dcdbd846ebc8cb75231eafdc55bf5ff72f4f6b08e7bde" gracePeriod=30
Feb 24 05:14:09.602764 master-0 kubenswrapper[4158]: I0224 05:14:09.599607 4158 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="sbdb" containerID="cri-o://3ac9024cb0a66dd8734739cbb5bcd6e679ea43f4883a07172cf2a9991710293b" gracePeriod=30
Feb 24 05:14:09.635934 master-0 kubenswrapper[4158]: I0224 05:14:09.635732 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podStartSLOduration=10.92442195 podStartE2EDuration="28.635696366s" podCreationTimestamp="2026-02-24 05:13:41 +0000 UTC" firstStartedPulling="2026-02-24 05:13:43.975509179 +0000 UTC m=+70.639505872" lastFinishedPulling="2026-02-24 05:14:01.686783585 +0000 UTC m=+88.350780288" observedRunningTime="2026-02-24 05:14:09.635541002 +0000 UTC m=+96.299537725" watchObservedRunningTime="2026-02-24 05:14:09.635696366 +0000 UTC m=+96.299693069"
Feb 24 05:14:09.641371 master-0 kubenswrapper[4158]: I0224 05:14:09.641320 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc"
Feb 24 05:14:09.641485 master-0 kubenswrapper[4158]: I0224 05:14:09.641414 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc"
Feb 24 05:14:09.649686 master-0 kubenswrapper[4158]: I0224 05:14:09.649579 4158 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovnkube-controller" containerID="cri-o://c3581e200e0fedaabb78ae31391c819c18c3dc337e1b5bf4ba652ace6f481142" gracePeriod=30
Feb 24 05:14:09.651744 master-0 kubenswrapper[4158]: I0224 05:14:09.651655 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=2.651619206 podStartE2EDuration="2.651619206s" podCreationTimestamp="2026-02-24 05:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:09.651289937 +0000 UTC m=+96.315286660" watchObservedRunningTime="2026-02-24 05:14:09.651619206 +0000 UTC m=+96.315615999"
Feb 24 05:14:10.144489 master-0 kubenswrapper[4158]: I0224 05:14:10.144372 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:10.144982 master-0 kubenswrapper[4158]: E0224 05:14:10.144762 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:10.607120 master-0 kubenswrapper[4158]: I0224 05:14:10.607062 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovnkube-controller/0.log"
Feb 24 05:14:10.609420 master-0 kubenswrapper[4158]: I0224 05:14:10.609337 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/kube-rbac-proxy-ovn-metrics/0.log"
Feb 24 05:14:10.610010 master-0 kubenswrapper[4158]: I0224 05:14:10.609954 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/kube-rbac-proxy-node/0.log"
Feb 24 05:14:10.610648 master-0 kubenswrapper[4158]: I0224 05:14:10.610560 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovn-acl-logging/0.log"
Feb 24 05:14:10.611365 master-0 kubenswrapper[4158]: I0224 05:14:10.611292 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovn-controller/0.log"
Feb 24 05:14:10.611848 master-0 kubenswrapper[4158]: I0224 05:14:10.611798 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="c3581e200e0fedaabb78ae31391c819c18c3dc337e1b5bf4ba652ace6f481142" exitCode=1
Feb 24 05:14:10.611848 master-0 kubenswrapper[4158]: I0224 05:14:10.611833 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="3ac9024cb0a66dd8734739cbb5bcd6e679ea43f4883a07172cf2a9991710293b" exitCode=0
Feb 24 05:14:10.611848 master-0 kubenswrapper[4158]: I0224 05:14:10.611846 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="e20b37593cc5f12d7142d2db129fb62b3a308a8745bb7b141ded23e85d1c9b0d" exitCode=0
Feb 24 05:14:10.611947 master-0 kubenswrapper[4158]: I0224 05:14:10.611856 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="95eacfe9fd75708247f1e22539a72a7a822f10f4feb0787530eb0888581d5f71" exitCode=0
Feb 24 05:14:10.611947 master-0 kubenswrapper[4158]: I0224 05:14:10.611850 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"c3581e200e0fedaabb78ae31391c819c18c3dc337e1b5bf4ba652ace6f481142"}
Feb 24 05:14:10.611947 master-0 kubenswrapper[4158]: I0224 05:14:10.611915 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"3ac9024cb0a66dd8734739cbb5bcd6e679ea43f4883a07172cf2a9991710293b"}
Feb 24 05:14:10.611947 master-0 kubenswrapper[4158]: I0224 05:14:10.611938 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"e20b37593cc5f12d7142d2db129fb62b3a308a8745bb7b141ded23e85d1c9b0d"}
Feb 24 05:14:10.612049 master-0 kubenswrapper[4158]: I0224 05:14:10.611960 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"95eacfe9fd75708247f1e22539a72a7a822f10f4feb0787530eb0888581d5f71"}
Feb 24 05:14:10.612049 master-0 kubenswrapper[4158]: I0224 05:14:10.611983 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"534fc9ea67b7965feba9acc2c3ff53cc31de9597f872ccc42a467fca80b78940"}
Feb 24 05:14:10.612049 master-0 kubenswrapper[4158]: I0224 05:14:10.611868 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="534fc9ea67b7965feba9acc2c3ff53cc31de9597f872ccc42a467fca80b78940" exitCode=143
Feb 24 05:14:10.612049 master-0 kubenswrapper[4158]: I0224 05:14:10.612022 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="001b7389ff7284eddcb83468a108004a69a6764bb951301d77e6fd0ef2663f76" exitCode=143
Feb 24 05:14:10.612049 master-0 kubenswrapper[4158]: I0224 05:14:10.612042 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="5af588ee5b7c246c5b0dcdbd846ebc8cb75231eafdc55bf5ff72f4f6b08e7bde" exitCode=143
Feb 24 05:14:10.612165 master-0 kubenswrapper[4158]: I0224 05:14:10.612059 4158 generic.go:334] "Generic (PLEG): container finished" podID="ba37be4c-fd93-485e-9599-de562820d909" containerID="a4ee51818f73d3d21a93ac7287e5f4f702e637382e2f6334fc951ed591866bc6" exitCode=143
Feb 24 05:14:10.612165 master-0 kubenswrapper[4158]: I0224 05:14:10.612084 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"001b7389ff7284eddcb83468a108004a69a6764bb951301d77e6fd0ef2663f76"}
Feb 24 05:14:10.612165 master-0 kubenswrapper[4158]: I0224 05:14:10.612104 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"5af588ee5b7c246c5b0dcdbd846ebc8cb75231eafdc55bf5ff72f4f6b08e7bde"}
Feb 24 05:14:10.612165 master-0 kubenswrapper[4158]: I0224 05:14:10.612124 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"a4ee51818f73d3d21a93ac7287e5f4f702e637382e2f6334fc951ed591866bc6"}
Feb 24 05:14:10.914435 master-0 kubenswrapper[4158]: I0224 05:14:10.914355 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovnkube-controller/0.log"
Feb 24 05:14:10.917671 master-0 kubenswrapper[4158]: I0224 05:14:10.917614 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/kube-rbac-proxy-ovn-metrics/0.log"
Feb 24 05:14:10.918825 master-0 kubenswrapper[4158]: I0224 05:14:10.918760 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/kube-rbac-proxy-node/0.log"
Feb 24 05:14:10.919645 master-0 kubenswrapper[4158]: I0224 05:14:10.919599 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovn-acl-logging/0.log"
Feb 24 05:14:10.920408 master-0 kubenswrapper[4158]: I0224 05:14:10.920361 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovn-controller/0.log"
Feb 24 05:14:10.921235 master-0 kubenswrapper[4158]: I0224 05:14:10.921189 4158 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc"
Feb 24 05:14:10.928564 master-0 kubenswrapper[4158]: I0224 05:14:10.928510 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-openvswitch\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") "
Feb 24 05:14:10.928752 master-0 kubenswrapper[4158]: I0224 05:14:10.928689 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:14:10.928813 master-0 kubenswrapper[4158]: I0224 05:14:10.928619 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-etc-openvswitch\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") "
Feb 24 05:14:10.928870 master-0 kubenswrapper[4158]: I0224 05:14:10.928825 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-node-log\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") "
Feb 24 05:14:10.928927 master-0 kubenswrapper[4158]: I0224 05:14:10.928899 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-env-overrides\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") "
Feb 24 05:14:10.929000 master-0 kubenswrapper[4158]: I0224 05:14:10.928893 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:14:10.929071 master-0 kubenswrapper[4158]: I0224 05:14:10.928930 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-node-log" (OuterVolumeSpecName: "node-log") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:14:10.929071 master-0 kubenswrapper[4158]: I0224 05:14:10.929006 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "host-run-netns".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.929071 master-0 kubenswrapper[4158]: I0224 05:14:10.928958 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-netns\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929180 master-0 kubenswrapper[4158]: I0224 05:14:10.929099 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-config\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929180 master-0 kubenswrapper[4158]: I0224 05:14:10.929147 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-systemd-units\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929258 master-0 kubenswrapper[4158]: I0224 05:14:10.929199 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-ovn-kubernetes\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929333 master-0 kubenswrapper[4158]: I0224 05:14:10.929257 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-var-lib-openvswitch\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929390 master-0 kubenswrapper[4158]: I0224 05:14:10.929351 4158 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929438 master-0 kubenswrapper[4158]: I0224 05:14:10.929412 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-script-lib\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929501 master-0 kubenswrapper[4158]: I0224 05:14:10.929459 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-slash\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929559 master-0 kubenswrapper[4158]: I0224 05:14:10.929506 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-netd\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929615 master-0 kubenswrapper[4158]: I0224 05:14:10.929562 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba37be4c-fd93-485e-9599-de562820d909-ovn-node-metrics-cert\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929669 master-0 kubenswrapper[4158]: I0224 05:14:10.929609 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-bin\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929669 master-0 kubenswrapper[4158]: I0224 05:14:10.929657 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-ovn\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929773 master-0 kubenswrapper[4158]: I0224 05:14:10.929706 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-systemd\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929773 master-0 kubenswrapper[4158]: I0224 05:14:10.929758 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvshk\" (UniqueName: \"kubernetes.io/projected/ba37be4c-fd93-485e-9599-de562820d909-kube-api-access-lvshk\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929884 master-0 kubenswrapper[4158]: I0224 05:14:10.929789 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.929884 master-0 kubenswrapper[4158]: I0224 05:14:10.929809 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-log-socket\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.929884 master-0 kubenswrapper[4158]: I0224 05:14:10.929846 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.929884 master-0 kubenswrapper[4158]: I0224 05:14:10.929864 4158 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-kubelet\") pod \"ba37be4c-fd93-485e-9599-de562820d909\" (UID: \"ba37be4c-fd93-485e-9599-de562820d909\") " Feb 24 05:14:10.930079 master-0 kubenswrapper[4158]: I0224 05:14:10.929929 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.930079 master-0 kubenswrapper[4158]: I0224 05:14:10.929961 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-slash" (OuterVolumeSpecName: "host-slash") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.930079 master-0 kubenswrapper[4158]: I0224 05:14:10.929994 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.930233 master-0 kubenswrapper[4158]: I0224 05:14:10.930108 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.930436 master-0 kubenswrapper[4158]: I0224 05:14:10.930374 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:14:10.930505 master-0 kubenswrapper[4158]: I0224 05:14:10.930414 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:14:10.930546 master-0 kubenswrapper[4158]: I0224 05:14:10.930498 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-log-socket" (OuterVolumeSpecName: "log-socket") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.930641 master-0 kubenswrapper[4158]: I0224 05:14:10.930584 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.930688 master-0 kubenswrapper[4158]: I0224 05:14:10.930622 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.930738 master-0 kubenswrapper[4158]: I0224 05:14:10.930590 4158 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.930784 master-0 kubenswrapper[4158]: I0224 05:14:10.930737 4158 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-netns\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.930831 master-0 kubenswrapper[4158]: I0224 05:14:10.930690 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.930831 master-0 kubenswrapper[4158]: I0224 05:14:10.930808 4158 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.930831 master-0 kubenswrapper[4158]: I0224 05:14:10.930438 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:14:10.930963 master-0 kubenswrapper[4158]: I0224 05:14:10.930835 4158 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.930963 master-0 kubenswrapper[4158]: I0224 05:14:10.930902 4158 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.931036 master-0 kubenswrapper[4158]: I0224 05:14:10.930937 4158 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-slash\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.931036 master-0 kubenswrapper[4158]: I0224 05:14:10.931003 4158 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.931036 master-0 kubenswrapper[4158]: I0224 05:14:10.931027 4158 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.931138 master-0 kubenswrapper[4158]: I0224 05:14:10.931091 4158 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.931138 master-0 kubenswrapper[4158]: I0224 05:14:10.931122 4158 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-log-socket\") on 
node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.931216 master-0 kubenswrapper[4158]: I0224 05:14:10.931186 4158 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-node-log\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.931255 master-0 kubenswrapper[4158]: I0224 05:14:10.931211 4158 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.931294 master-0 kubenswrapper[4158]: I0224 05:14:10.931268 4158 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:10.938762 master-0 kubenswrapper[4158]: I0224 05:14:10.938703 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba37be4c-fd93-485e-9599-de562820d909-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:14:10.939844 master-0 kubenswrapper[4158]: I0224 05:14:10.939732 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba37be4c-fd93-485e-9599-de562820d909-kube-api-access-lvshk" (OuterVolumeSpecName: "kube-api-access-lvshk") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "kube-api-access-lvshk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:14:10.943922 master-0 kubenswrapper[4158]: I0224 05:14:10.943850 4158 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ba37be4c-fd93-485e-9599-de562820d909" (UID: "ba37be4c-fd93-485e-9599-de562820d909"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:14:10.986868 master-0 kubenswrapper[4158]: I0224 05:14:10.986801 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vd82q"] Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 05:14:10.986929 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="northd" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: I0224 05:14:10.986942 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="northd" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 05:14:10.986952 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kube-rbac-proxy-node" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: I0224 05:14:10.986959 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kube-rbac-proxy-node" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 05:14:10.986966 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovn-controller" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: I0224 05:14:10.986974 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovn-controller" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 
05:14:10.986980 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: I0224 05:14:10.986987 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 05:14:10.986994 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="sbdb" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: I0224 05:14:10.987000 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="sbdb" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 05:14:10.987008 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kubecfg-setup" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: I0224 05:14:10.987014 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kubecfg-setup" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 05:14:10.987022 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovn-acl-logging" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: I0224 05:14:10.987030 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovn-acl-logging" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 05:14:10.987037 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="nbdb" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: I0224 05:14:10.987046 4158 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="nbdb" Feb 24 05:14:10.987034 master-0 kubenswrapper[4158]: E0224 05:14:10.987054 4158 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovnkube-controller" Feb 24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987062 4158 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovnkube-controller" Feb 24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987099 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovn-controller" Feb 24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987109 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987118 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="sbdb" Feb 24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987126 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovnkube-controller" Feb 24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987133 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="kube-rbac-proxy-node" Feb 24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987140 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="ovn-acl-logging" Feb 24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987147 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="nbdb" Feb 
24 05:14:10.987690 master-0 kubenswrapper[4158]: I0224 05:14:10.987153 4158 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba37be4c-fd93-485e-9599-de562820d909" containerName="northd" Feb 24 05:14:10.993792 master-0 kubenswrapper[4158]: I0224 05:14:10.993743 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032382 master-0 kubenswrapper[4158]: I0224 05:14:11.032142 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032382 master-0 kubenswrapper[4158]: I0224 05:14:11.032271 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032664 master-0 kubenswrapper[4158]: I0224 05:14:11.032482 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032664 master-0 kubenswrapper[4158]: I0224 05:14:11.032555 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: 
\"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032664 master-0 kubenswrapper[4158]: I0224 05:14:11.032597 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032664 master-0 kubenswrapper[4158]: I0224 05:14:11.032638 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032976 master-0 kubenswrapper[4158]: I0224 05:14:11.032679 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032976 master-0 kubenswrapper[4158]: I0224 05:14:11.032715 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:11.032976 master-0 kubenswrapper[4158]: I0224 05:14:11.032777 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.032976 master-0 kubenswrapper[4158]: I0224 05:14:11.032821 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.032976 master-0 kubenswrapper[4158]: I0224 05:14:11.032875 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.032976 master-0 kubenswrapper[4158]: I0224 05:14:11.032908 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79h66\" (UniqueName: \"kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.032976 master-0 kubenswrapper[4158]: I0224 05:14:11.032947 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.032976 master-0 kubenswrapper[4158]: I0224 05:14:11.032982 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033017 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033051 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033090 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033147 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033184 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033217 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033272 4158 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\""
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033710 4158 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ba37be4c-fd93-485e-9599-de562820d909-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\""
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033799 4158 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033862 4158 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba37be4c-fd93-485e-9599-de562820d909-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033893 4158 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-run-systemd\") on node \"master-0\" DevicePath \"\""
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033918 4158 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvshk\" (UniqueName: \"kubernetes.io/projected/ba37be4c-fd93-485e-9599-de562820d909-kube-api-access-lvshk\") on node \"master-0\" DevicePath \"\""
Feb 24 05:14:11.034117 master-0 kubenswrapper[4158]: I0224 05:14:11.033944 4158 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba37be4c-fd93-485e-9599-de562820d909-host-kubelet\") on node \"master-0\" DevicePath \"\""
Feb 24 05:14:11.134694 master-0 kubenswrapper[4158]: I0224 05:14:11.134530 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.134694 master-0 kubenswrapper[4158]: I0224 05:14:11.134585 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.134694 master-0 kubenswrapper[4158]: I0224 05:14:11.134608 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.134694 master-0 kubenswrapper[4158]: I0224 05:14:11.134640 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.134694 master-0 kubenswrapper[4158]: I0224 05:14:11.134657 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135083 master-0 kubenswrapper[4158]: I0224 05:14:11.134938 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135083 master-0 kubenswrapper[4158]: I0224 05:14:11.135049 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135083 master-0 kubenswrapper[4158]: I0224 05:14:11.134977 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135224 master-0 kubenswrapper[4158]: I0224 05:14:11.135204 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135297 master-0 kubenswrapper[4158]: I0224 05:14:11.135207 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135390 master-0 kubenswrapper[4158]: I0224 05:14:11.135346 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135465 master-0 kubenswrapper[4158]: I0224 05:14:11.135434 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135508 master-0 kubenswrapper[4158]: I0224 05:14:11.135470 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135657 master-0 kubenswrapper[4158]: I0224 05:14:11.135630 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.135955 master-0 kubenswrapper[4158]: I0224 05:14:11.135922 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136013 master-0 kubenswrapper[4158]: I0224 05:14:11.135986 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136071 master-0 kubenswrapper[4158]: I0224 05:14:11.135952 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136126 master-0 kubenswrapper[4158]: I0224 05:14:11.136089 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136162 master-0 kubenswrapper[4158]: I0224 05:14:11.136135 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136213 master-0 kubenswrapper[4158]: I0224 05:14:11.136037 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136379 master-0 kubenswrapper[4158]: I0224 05:14:11.136358 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136442 master-0 kubenswrapper[4158]: I0224 05:14:11.136386 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136586 master-0 kubenswrapper[4158]: I0224 05:14:11.136504 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136624 master-0 kubenswrapper[4158]: I0224 05:14:11.136539 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136687 master-0 kubenswrapper[4158]: I0224 05:14:11.136566 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136687 master-0 kubenswrapper[4158]: I0224 05:14:11.136674 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136769 master-0 kubenswrapper[4158]: I0224 05:14:11.136718 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136769 master-0 kubenswrapper[4158]: I0224 05:14:11.136715 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136853 master-0 kubenswrapper[4158]: I0224 05:14:11.136782 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136853 master-0 kubenswrapper[4158]: I0224 05:14:11.136809 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136927 master-0 kubenswrapper[4158]: I0224 05:14:11.136860 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136961 master-0 kubenswrapper[4158]: I0224 05:14:11.136918 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79h66\" (UniqueName: \"kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.136997 master-0 kubenswrapper[4158]: I0224 05:14:11.136980 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.137027 master-0 kubenswrapper[4158]: I0224 05:14:11.136998 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.137085 master-0 kubenswrapper[4158]: I0224 05:14:11.137038 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.137116 master-0 kubenswrapper[4158]: I0224 05:14:11.137086 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.137186 master-0 kubenswrapper[4158]: I0224 05:14:11.137160 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.137959 master-0 kubenswrapper[4158]: I0224 05:14:11.137903 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.140742 master-0 kubenswrapper[4158]: I0224 05:14:11.140711 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.143725 master-0 kubenswrapper[4158]: I0224 05:14:11.143679 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:11.144177 master-0 kubenswrapper[4158]: E0224 05:14:11.144125 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:11.164005 master-0 kubenswrapper[4158]: I0224 05:14:11.163946 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79h66\" (UniqueName: \"kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.319893 master-0 kubenswrapper[4158]: I0224 05:14:11.319817 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:11.339911 master-0 kubenswrapper[4158]: W0224 05:14:11.339829 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74e8b3c8_da80_492c_bfcf_199b40bde40b.slice/crio-bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29 WatchSource:0}: Error finding container bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29: Status 404 returned error can't find the container with id bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29
Feb 24 05:14:11.620605 master-0 kubenswrapper[4158]: I0224 05:14:11.620052 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovnkube-controller/0.log"
Feb 24 05:14:11.622791 master-0 kubenswrapper[4158]: I0224 05:14:11.622755 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/kube-rbac-proxy-ovn-metrics/0.log"
Feb 24 05:14:11.623801 master-0 kubenswrapper[4158]: I0224 05:14:11.623764 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/kube-rbac-proxy-node/0.log"
Feb 24 05:14:11.624763 master-0 kubenswrapper[4158]: I0224 05:14:11.624699 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovn-acl-logging/0.log"
Feb 24 05:14:11.625610 master-0 kubenswrapper[4158]: I0224 05:14:11.625543 4158 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jtdzc_ba37be4c-fd93-485e-9599-de562820d909/ovn-controller/0.log"
Feb 24 05:14:11.626573 master-0 kubenswrapper[4158]: I0224 05:14:11.626491 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc" event={"ID":"ba37be4c-fd93-485e-9599-de562820d909","Type":"ContainerDied","Data":"d91bf7b8d34e1f15ac85412f592332fa821c616af9acf0e1fcb802613907ca17"}
Feb 24 05:14:11.626681 master-0 kubenswrapper[4158]: I0224 05:14:11.626593 4158 scope.go:117] "RemoveContainer" containerID="c3581e200e0fedaabb78ae31391c819c18c3dc337e1b5bf4ba652ace6f481142"
Feb 24 05:14:11.626681 master-0 kubenswrapper[4158]: I0224 05:14:11.626607 4158 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jtdzc"
Feb 24 05:14:11.629234 master-0 kubenswrapper[4158]: I0224 05:14:11.629167 4158 generic.go:334] "Generic (PLEG): container finished" podID="74e8b3c8-da80-492c-bfcf-199b40bde40b" containerID="1bdb0179be74494ec4b280a7fe7b1b7a56e9431efa12bfe29e8db06ceb6772c4" exitCode=0
Feb 24 05:14:11.629384 master-0 kubenswrapper[4158]: I0224 05:14:11.629229 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerDied","Data":"1bdb0179be74494ec4b280a7fe7b1b7a56e9431efa12bfe29e8db06ceb6772c4"}
Feb 24 05:14:11.629384 master-0 kubenswrapper[4158]: I0224 05:14:11.629360 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29"}
Feb 24 05:14:11.651764 master-0 kubenswrapper[4158]: I0224 05:14:11.651724 4158 scope.go:117] "RemoveContainer" containerID="3ac9024cb0a66dd8734739cbb5bcd6e679ea43f4883a07172cf2a9991710293b"
Feb 24 05:14:11.667021 master-0 kubenswrapper[4158]: I0224 05:14:11.666974 4158 scope.go:117] "RemoveContainer" containerID="e20b37593cc5f12d7142d2db129fb62b3a308a8745bb7b141ded23e85d1c9b0d"
Feb 24 05:14:11.690814 master-0 kubenswrapper[4158]: I0224 05:14:11.690717 4158 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jtdzc"]
Feb 24 05:14:11.696150 master-0 kubenswrapper[4158]: I0224 05:14:11.694073 4158 scope.go:117] "RemoveContainer" containerID="95eacfe9fd75708247f1e22539a72a7a822f10f4feb0787530eb0888581d5f71"
Feb 24 05:14:11.696150 master-0 kubenswrapper[4158]: I0224 05:14:11.695260 4158 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jtdzc"]
Feb 24 05:14:11.735580 master-0 kubenswrapper[4158]: I0224 05:14:11.735512 4158 scope.go:117] "RemoveContainer" containerID="534fc9ea67b7965feba9acc2c3ff53cc31de9597f872ccc42a467fca80b78940"
Feb 24 05:14:11.751578 master-0 kubenswrapper[4158]: I0224 05:14:11.751485 4158 scope.go:117] "RemoveContainer" containerID="001b7389ff7284eddcb83468a108004a69a6764bb951301d77e6fd0ef2663f76"
Feb 24 05:14:11.770185 master-0 kubenswrapper[4158]: I0224 05:14:11.770112 4158 scope.go:117] "RemoveContainer" containerID="5af588ee5b7c246c5b0dcdbd846ebc8cb75231eafdc55bf5ff72f4f6b08e7bde"
Feb 24 05:14:11.791021 master-0 kubenswrapper[4158]: I0224 05:14:11.790962 4158 scope.go:117] "RemoveContainer" containerID="a4ee51818f73d3d21a93ac7287e5f4f702e637382e2f6334fc951ed591866bc6"
Feb 24 05:14:11.815193 master-0 kubenswrapper[4158]: I0224 05:14:11.815137 4158 scope.go:117] "RemoveContainer" containerID="590fe8d45e6d25b28dfc8d5e64f83cce4e0f5c535002110e93c3c9f684b27645"
Feb 24 05:14:12.145091 master-0 kubenswrapper[4158]: I0224 05:14:12.145004 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:12.145399 master-0 kubenswrapper[4158]: E0224 05:14:12.145267 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:12.151708 master-0 kubenswrapper[4158]: I0224 05:14:12.151624 4158 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba37be4c-fd93-485e-9599-de562820d909" path="/var/lib/kubelet/pods/ba37be4c-fd93-485e-9599-de562820d909/volumes"
Feb 24 05:14:12.641374 master-0 kubenswrapper[4158]: I0224 05:14:12.641297 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"7c9aa8c973c65d8f4c452f9bf4f0d6841585cad2f7c52dea4e5f58edf2355a80"}
Feb 24 05:14:12.641374 master-0 kubenswrapper[4158]: I0224 05:14:12.641371 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"203e4064407b2cdb815618524445346028447297b9402b824cb5a632941e0bee"}
Feb 24 05:14:12.641374 master-0 kubenswrapper[4158]: I0224 05:14:12.641388 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"da009d497d7d9c08e6a6c911df5d86431339c875bd8a2c57960daa78be33978c"}
Feb 24 05:14:12.641374 master-0 kubenswrapper[4158]: I0224 05:14:12.641398 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"e910011d31490322f7d43f80a8bc50e5f7853db0615cb333cdcf2dada835b1db"}
Feb 24 05:14:12.641374 master-0 kubenswrapper[4158]: I0224 05:14:12.641410 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"8449b9a902b196b5b9c985da86c2e92456f05b33611c38d2eb60aed256b077ce"}
Feb 24 05:14:12.642415 master-0 kubenswrapper[4158]: I0224 05:14:12.641419 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"4aaabdc179705a09585e7aca3fe3da186005a3f05e026f5dd3d730f14bd01725"}
Feb 24 05:14:13.144081 master-0 kubenswrapper[4158]: I0224 05:14:13.143559 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:13.144081 master-0 kubenswrapper[4158]: E0224 05:14:13.143776 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:14.144020 master-0 kubenswrapper[4158]: I0224 05:14:14.143941 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:14.146305 master-0 kubenswrapper[4158]: E0224 05:14:14.146223 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:14.659078 master-0 kubenswrapper[4158]: I0224 05:14:14.657814 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"453c83dc686fda5c23a1bd922ec1a329858ad134211012940998be820aa0950b"}
Feb 24 05:14:14.974796 master-0 kubenswrapper[4158]: I0224 05:14:14.974645 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:14.975165 master-0 kubenswrapper[4158]: E0224 05:14:14.974964 4158 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 05:14:14.975165 master-0 kubenswrapper[4158]: E0224 05:14:14.975123 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:15:18.975081928 +0000 UTC m=+165.639078651 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found
Feb 24 05:14:15.144370 master-0 kubenswrapper[4158]: I0224 05:14:15.144219 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:15.145781 master-0 kubenswrapper[4158]: E0224 05:14:15.144434 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:16.143814 master-0 kubenswrapper[4158]: I0224 05:14:16.143725 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:16.144186 master-0 kubenswrapper[4158]: E0224 05:14:16.143871 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa"
Feb 24 05:14:16.287261 master-0 kubenswrapper[4158]: I0224 05:14:16.287133 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:16.288172 master-0 kubenswrapper[4158]: E0224 05:14:16.287495 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 24 05:14:16.288172 master-0 kubenswrapper[4158]: E0224 05:14:16.287553 4158 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 24 05:14:16.288172 master-0 kubenswrapper[4158]: E0224 05:14:16.287584 4158 projected.go:194] Error preparing data for projected volume kube-api-access-ckfnc for pod openshift-network-diagnostics/network-check-target-vp2jg: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 05:14:16.288172 master-0 kubenswrapper[4158]: E0224 05:14:16.287689 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc podName:1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa nodeName:}" failed. No retries permitted until 2026-02-24 05:14:48.28765602 +0000 UTC m=+134.951652753 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-ckfnc" (UniqueName: "kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc") pod "network-check-target-vp2jg" (UID: "1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 24 05:14:17.144894 master-0 kubenswrapper[4158]: I0224 05:14:17.144276 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:17.145271 master-0 kubenswrapper[4158]: E0224 05:14:17.144995 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a"
Feb 24 05:14:17.675645 master-0 kubenswrapper[4158]: I0224 05:14:17.675575 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" event={"ID":"74e8b3c8-da80-492c-bfcf-199b40bde40b","Type":"ContainerStarted","Data":"6998fda669e477b51cc7d9ea72f3a3895e17d2dc600009e90388810400e1d30e"}
Feb 24 05:14:17.677152 master-0 kubenswrapper[4158]: I0224 05:14:17.676822 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:17.677152 master-0 kubenswrapper[4158]: I0224 05:14:17.676859 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:17.677152 master-0 kubenswrapper[4158]: I0224 05:14:17.676870 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24
05:14:17.734865 master-0 kubenswrapper[4158]: I0224 05:14:17.734340 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" podStartSLOduration=7.734287073 podStartE2EDuration="7.734287073s" podCreationTimestamp="2026-02-24 05:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:17.733915953 +0000 UTC m=+104.397912666" watchObservedRunningTime="2026-02-24 05:14:17.734287073 +0000 UTC m=+104.398283806" Feb 24 05:14:17.740070 master-0 kubenswrapper[4158]: I0224 05:14:17.740007 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:17.742375 master-0 kubenswrapper[4158]: I0224 05:14:17.742334 4158 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:18.106426 master-0 kubenswrapper[4158]: I0224 05:14:18.105784 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2vsjh"] Feb 24 05:14:18.106426 master-0 kubenswrapper[4158]: I0224 05:14:18.105938 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:14:18.106426 master-0 kubenswrapper[4158]: E0224 05:14:18.106112 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:14:18.112370 master-0 kubenswrapper[4158]: I0224 05:14:18.111497 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-vp2jg"] Feb 24 05:14:18.112370 master-0 kubenswrapper[4158]: I0224 05:14:18.111706 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:14:18.112370 master-0 kubenswrapper[4158]: E0224 05:14:18.111840 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:14:20.144603 master-0 kubenswrapper[4158]: I0224 05:14:20.144562 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:14:20.145292 master-0 kubenswrapper[4158]: I0224 05:14:20.144638 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:14:20.145436 master-0 kubenswrapper[4158]: E0224 05:14:20.145400 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:14:20.145512 master-0 kubenswrapper[4158]: E0224 05:14:20.145273 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:14:22.144582 master-0 kubenswrapper[4158]: I0224 05:14:22.144481 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:14:22.145657 master-0 kubenswrapper[4158]: I0224 05:14:22.144675 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:14:22.145794 master-0 kubenswrapper[4158]: E0224 05:14:22.145720 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vp2jg" podUID="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" Feb 24 05:14:22.145881 master-0 kubenswrapper[4158]: E0224 05:14:22.145818 4158 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2vsjh" podUID="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" Feb 24 05:14:23.144191 master-0 kubenswrapper[4158]: I0224 05:14:23.144121 4158 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Feb 24 05:14:23.144420 master-0 kubenswrapper[4158]: I0224 05:14:23.144358 4158 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 24 05:14:23.195796 master-0 kubenswrapper[4158]: I0224 05:14:23.195103 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"] Feb 24 05:14:23.195796 master-0 kubenswrapper[4158]: I0224 05:14:23.195596 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:23.199147 master-0 kubenswrapper[4158]: I0224 05:14:23.199083 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 05:14:23.199384 master-0 kubenswrapper[4158]: I0224 05:14:23.199359 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 05:14:23.200043 master-0 kubenswrapper[4158]: I0224 05:14:23.199972 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 05:14:23.200043 master-0 kubenswrapper[4158]: I0224 05:14:23.199948 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"] Feb 24 05:14:23.200413 master-0 kubenswrapper[4158]: I0224 05:14:23.200369 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:23.202224 master-0 kubenswrapper[4158]: I0224 05:14:23.200929 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"] Feb 24 05:14:23.202224 master-0 kubenswrapper[4158]: I0224 05:14:23.201267 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:23.202224 master-0 kubenswrapper[4158]: I0224 05:14:23.201920 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"] Feb 24 05:14:23.202224 master-0 kubenswrapper[4158]: I0224 05:14:23.201923 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.202998 master-0 kubenswrapper[4158]: I0224 05:14:23.202426 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:23.202998 master-0 kubenswrapper[4158]: I0224 05:14:23.202507 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 05:14:23.202998 master-0 kubenswrapper[4158]: I0224 05:14:23.202681 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 24 05:14:23.203463 master-0 kubenswrapper[4158]: I0224 05:14:23.203420 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"] Feb 24 05:14:23.204062 master-0 kubenswrapper[4158]: I0224 05:14:23.204031 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.206568 master-0 kubenswrapper[4158]: I0224 05:14:23.204449 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 05:14:23.206568 master-0 kubenswrapper[4158]: I0224 05:14:23.204699 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"] Feb 24 05:14:23.206568 master-0 kubenswrapper[4158]: I0224 05:14:23.206490 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:23.208197 master-0 kubenswrapper[4158]: I0224 05:14:23.206857 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 05:14:23.214943 master-0 kubenswrapper[4158]: I0224 05:14:23.210570 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 05:14:23.214943 master-0 kubenswrapper[4158]: I0224 05:14:23.212238 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"] Feb 24 05:14:23.214943 master-0 kubenswrapper[4158]: I0224 05:14:23.212668 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:14:23.214943 master-0 kubenswrapper[4158]: I0224 05:14:23.214108 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"] Feb 24 05:14:23.214943 master-0 kubenswrapper[4158]: I0224 05:14:23.214444 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:23.219434 master-0 kubenswrapper[4158]: I0224 05:14:23.215466 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 05:14:23.219434 master-0 kubenswrapper[4158]: I0224 05:14:23.215959 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 24 05:14:23.219434 master-0 kubenswrapper[4158]: I0224 05:14:23.216072 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.219434 master-0 kubenswrapper[4158]: I0224 05:14:23.216180 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 05:14:23.219434 master-0 kubenswrapper[4158]: I0224 05:14:23.216578 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"] Feb 24 05:14:23.219434 master-0 kubenswrapper[4158]: I0224 05:14:23.216972 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:14:23.219434 master-0 kubenswrapper[4158]: I0224 05:14:23.217000 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"] Feb 24 05:14:23.219434 master-0 kubenswrapper[4158]: I0224 05:14:23.217567 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:14:23.219850 master-0 kubenswrapper[4158]: I0224 05:14:23.219777 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"] Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.219989 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.220263 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"] Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.220422 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.220676 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.220714 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.220959 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.221176 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.221566 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.221608 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.223057 master-0 kubenswrapper[4158]: I0224 05:14:23.222065 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 05:14:23.226333 master-0 kubenswrapper[4158]: I0224 05:14:23.226132 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"] Feb 24 05:14:23.226972 master-0 kubenswrapper[4158]: I0224 05:14:23.226828 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:23.227392 master-0 kubenswrapper[4158]: I0224 05:14:23.227189 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 05:14:23.228134 master-0 kubenswrapper[4158]: I0224 05:14:23.227598 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 05:14:23.228134 master-0 kubenswrapper[4158]: I0224 05:14:23.228029 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 05:14:23.228286 master-0 kubenswrapper[4158]: I0224 05:14:23.228185 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 05:14:23.229246 master-0 kubenswrapper[4158]: I0224 05:14:23.228429 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 05:14:23.240291 master-0 kubenswrapper[4158]: I0224 05:14:23.240233 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 05:14:23.247485 master-0 kubenswrapper[4158]: I0224 05:14:23.247297 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99"] Feb 24 05:14:23.250535 master-0 kubenswrapper[4158]: I0224 05:14:23.247698 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 05:14:23.250535 master-0 kubenswrapper[4158]: I0224 05:14:23.248162 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" 
Feb 24 05:14:23.253652 master-0 kubenswrapper[4158]: I0224 05:14:23.253613 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 05:14:23.254973 master-0 kubenswrapper[4158]: I0224 05:14:23.253827 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 24 05:14:23.254973 master-0 kubenswrapper[4158]: I0224 05:14:23.254134 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"] Feb 24 05:14:23.254973 master-0 kubenswrapper[4158]: I0224 05:14:23.254669 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" Feb 24 05:14:23.255126 master-0 kubenswrapper[4158]: I0224 05:14:23.254992 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:23.256677 master-0 kubenswrapper[4158]: I0224 05:14:23.256636 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 24 05:14:23.256953 master-0 kubenswrapper[4158]: I0224 05:14:23.256860 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.257023 master-0 kubenswrapper[4158]: I0224 05:14:23.257006 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 05:14:23.257173 master-0 kubenswrapper[4158]: I0224 05:14:23.257144 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 05:14:23.257260 master-0 kubenswrapper[4158]: I0224 05:14:23.257177 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 05:14:23.257390 master-0 kubenswrapper[4158]: I0224 05:14:23.257362 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 24 05:14:23.257488 master-0 kubenswrapper[4158]: I0224 05:14:23.257451 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.258003 master-0 kubenswrapper[4158]: I0224 05:14:23.257931 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 05:14:23.258045 master-0 kubenswrapper[4158]: I0224 05:14:23.257923 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.258134 master-0 kubenswrapper[4158]: I0224 05:14:23.258088 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 05:14:23.258698 master-0 kubenswrapper[4158]: I0224 05:14:23.258092 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwzq\" (UniqueName: \"kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:23.258771 master-0 kubenswrapper[4158]: I0224 05:14:23.258725 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:23.258771 master-0 kubenswrapper[4158]: I0224 05:14:23.258757 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:23.258844 master-0 kubenswrapper[4158]: I0224 05:14:23.258784 4158 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.258844 master-0 kubenswrapper[4158]: I0224 05:14:23.258816 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.258921 master-0 kubenswrapper[4158]: I0224 05:14:23.258843 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:23.258921 master-0 kubenswrapper[4158]: I0224 05:14:23.258867 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.258921 master-0 kubenswrapper[4158]: I0224 05:14:23.258888 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: 
\"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.258921 master-0 kubenswrapper[4158]: I0224 05:14:23.258912 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:23.259056 master-0 kubenswrapper[4158]: I0224 05:14:23.258978 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj8cq\" (UniqueName: \"kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:23.259056 master-0 kubenswrapper[4158]: I0224 05:14:23.259007 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:23.259056 master-0 kubenswrapper[4158]: I0224 05:14:23.259029 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:23.259174 master-0 kubenswrapper[4158]: I0224 05:14:23.259054 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:23.260868 master-0 kubenswrapper[4158]: I0224 05:14:23.260792 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.260983 master-0 kubenswrapper[4158]: I0224 05:14:23.260948 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 05:14:23.261153 master-0 kubenswrapper[4158]: I0224 05:14:23.261119 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 24 05:14:23.261466 master-0 kubenswrapper[4158]: I0224 05:14:23.261123 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 05:14:23.261575 master-0 kubenswrapper[4158]: I0224 05:14:23.261475 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 24 05:14:23.261685 master-0 kubenswrapper[4158]: I0224 05:14:23.261653 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 05:14:23.261985 master-0 kubenswrapper[4158]: I0224 05:14:23.261958 4158 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 05:14:23.262186 master-0 kubenswrapper[4158]: I0224 05:14:23.262163 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.262655 master-0 kubenswrapper[4158]: I0224 05:14:23.262598 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.262856 master-0 kubenswrapper[4158]: I0224 05:14:23.262826 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.263164 master-0 kubenswrapper[4158]: I0224 05:14:23.263137 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.263333 master-0 kubenswrapper[4158]: I0224 05:14:23.263282 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 05:14:23.263446 master-0 kubenswrapper[4158]: I0224 05:14:23.263421 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 05:14:23.264366 master-0 kubenswrapper[4158]: I0224 05:14:23.263919 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 24 05:14:23.264485 master-0 kubenswrapper[4158]: I0224 05:14:23.264456 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 24 05:14:23.266260 master-0 kubenswrapper[4158]: I0224 05:14:23.266223 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 05:14:23.266534 master-0 kubenswrapper[4158]: I0224 
05:14:23.266498 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"] Feb 24 05:14:23.267137 master-0 kubenswrapper[4158]: I0224 05:14:23.267020 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 24 05:14:23.267137 master-0 kubenswrapper[4158]: I0224 05:14:23.267114 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-4dhth"] Feb 24 05:14:23.267280 master-0 kubenswrapper[4158]: I0224 05:14:23.267030 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:23.267854 master-0 kubenswrapper[4158]: I0224 05:14:23.267798 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:23.268454 master-0 kubenswrapper[4158]: I0224 05:14:23.268425 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"] Feb 24 05:14:23.269648 master-0 kubenswrapper[4158]: I0224 05:14:23.268899 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:23.269648 master-0 kubenswrapper[4158]: I0224 05:14:23.269406 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"] Feb 24 05:14:23.271161 master-0 kubenswrapper[4158]: I0224 05:14:23.270239 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"] Feb 24 05:14:23.271161 master-0 kubenswrapper[4158]: I0224 05:14:23.270540 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 05:14:23.271161 master-0 kubenswrapper[4158]: I0224 05:14:23.270987 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 05:14:23.278021 master-0 kubenswrapper[4158]: I0224 05:14:23.271765 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 05:14:23.278021 master-0 kubenswrapper[4158]: I0224 05:14:23.272122 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 05:14:23.278021 master-0 kubenswrapper[4158]: I0224 05:14:23.273277 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"] Feb 24 05:14:23.278021 master-0 kubenswrapper[4158]: I0224 05:14:23.275039 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 05:14:23.278021 master-0 kubenswrapper[4158]: I0224 05:14:23.277875 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 05:14:23.278406 master-0 
kubenswrapper[4158]: I0224 05:14:23.278129 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 05:14:23.278467 master-0 kubenswrapper[4158]: I0224 05:14:23.278434 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 05:14:23.278737 master-0 kubenswrapper[4158]: I0224 05:14:23.278672 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 05:14:23.279264 master-0 kubenswrapper[4158]: I0224 05:14:23.279240 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 05:14:23.280123 master-0 kubenswrapper[4158]: I0224 05:14:23.280076 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"] Feb 24 05:14:23.281633 master-0 kubenswrapper[4158]: I0224 05:14:23.281558 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"] Feb 24 05:14:23.282686 master-0 kubenswrapper[4158]: I0224 05:14:23.282637 4158 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-r2vvc"] Feb 24 05:14:23.284930 master-0 kubenswrapper[4158]: I0224 05:14:23.283722 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:23.284930 master-0 kubenswrapper[4158]: I0224 05:14:23.284018 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 05:14:23.285225 master-0 kubenswrapper[4158]: I0224 05:14:23.285191 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"] Feb 24 05:14:23.287865 master-0 kubenswrapper[4158]: I0224 05:14:23.287818 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"] Feb 24 05:14:23.289111 master-0 kubenswrapper[4158]: I0224 05:14:23.289083 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"] Feb 24 05:14:23.289734 master-0 kubenswrapper[4158]: I0224 05:14:23.289703 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 05:14:23.289928 master-0 kubenswrapper[4158]: I0224 05:14:23.289899 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"] Feb 24 05:14:23.299795 master-0 kubenswrapper[4158]: I0224 05:14:23.299752 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"] Feb 24 05:14:23.301842 master-0 kubenswrapper[4158]: I0224 05:14:23.301807 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"] Feb 24 05:14:23.304281 master-0 kubenswrapper[4158]: I0224 05:14:23.304185 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"] Feb 24 05:14:23.307546 master-0 
kubenswrapper[4158]: I0224 05:14:23.307486 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"] Feb 24 05:14:23.308640 master-0 kubenswrapper[4158]: I0224 05:14:23.308604 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"] Feb 24 05:14:23.309169 master-0 kubenswrapper[4158]: I0224 05:14:23.309138 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-4dhth"] Feb 24 05:14:23.310216 master-0 kubenswrapper[4158]: I0224 05:14:23.310022 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99"] Feb 24 05:14:23.310791 master-0 kubenswrapper[4158]: I0224 05:14:23.310753 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"] Feb 24 05:14:23.312424 master-0 kubenswrapper[4158]: I0224 05:14:23.312391 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"] Feb 24 05:14:23.360204 master-0 kubenswrapper[4158]: I0224 05:14:23.360127 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwc5b\" (UniqueName: \"kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:14:23.360361 master-0 kubenswrapper[4158]: I0224 05:14:23.360239 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert\") pod 
\"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:14:23.360361 master-0 kubenswrapper[4158]: I0224 05:14:23.360279 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:23.360471 master-0 kubenswrapper[4158]: I0224 05:14:23.360363 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:14:23.360471 master-0 kubenswrapper[4158]: I0224 05:14:23.360398 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb68s\" (UniqueName: \"kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:23.360471 master-0 kubenswrapper[4158]: I0224 05:14:23.360429 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle\") pod 
\"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:14:23.360471 master-0 kubenswrapper[4158]: I0224 05:14:23.360466 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.360591 master-0 kubenswrapper[4158]: I0224 05:14:23.360496 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:23.360591 master-0 kubenswrapper[4158]: I0224 05:14:23.360523 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.360591 master-0 kubenswrapper[4158]: I0224 05:14:23.360555 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdpfz\" (UniqueName: \"kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:23.360591 master-0 kubenswrapper[4158]: I0224 05:14:23.360582 4158 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:23.360788 master-0 kubenswrapper[4158]: I0224 05:14:23.360616 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgf94\" (UniqueName: \"kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.360788 master-0 kubenswrapper[4158]: I0224 05:14:23.360649 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xzk\" (UniqueName: \"kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:14:23.360788 master-0 kubenswrapper[4158]: I0224 05:14:23.360678 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8cq\" (UniqueName: \"kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:23.360788 master-0 kubenswrapper[4158]: I0224 05:14:23.360705 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:23.360788 master-0 kubenswrapper[4158]: I0224 05:14:23.360733 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:23.360788 master-0 kubenswrapper[4158]: I0224 05:14:23.360758 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:23.360788 master-0 kubenswrapper[4158]: I0224 05:14:23.360785 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.360970 master-0 kubenswrapper[4158]: I0224 05:14:23.360812 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4d5x\" (UniqueName: \"kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: 
\"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:23.360970 master-0 kubenswrapper[4158]: I0224 05:14:23.360906 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:23.360970 master-0 kubenswrapper[4158]: I0224 05:14:23.360946 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwzq\" (UniqueName: \"kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:23.361048 master-0 kubenswrapper[4158]: I0224 05:14:23.360978 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:14:23.361048 master-0 kubenswrapper[4158]: I0224 05:14:23.361009 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:23.361048 master-0 kubenswrapper[4158]: I0224 05:14:23.361041 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:14:23.361128 master-0 kubenswrapper[4158]: I0224 05:14:23.361081 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:23.361128 master-0 kubenswrapper[4158]: I0224 05:14:23.361108 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:23.361181 master-0 kubenswrapper[4158]: I0224 05:14:23.361134 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmf87\" (UniqueName: \"kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:14:23.361181 master-0 kubenswrapper[4158]: I0224 05:14:23.361161 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:14:23.361241 master-0 kubenswrapper[4158]: I0224 05:14:23.361188 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:23.361241 master-0 kubenswrapper[4158]: I0224 05:14:23.361216 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:14:23.361296 master-0 kubenswrapper[4158]: I0224 05:14:23.361243 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:14:23.361377 master-0 
kubenswrapper[4158]: I0224 05:14:23.361336 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:23.361377 master-0 kubenswrapper[4158]: I0224 05:14:23.361371 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.361434 master-0 kubenswrapper[4158]: I0224 05:14:23.361402 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.361434 master-0 kubenswrapper[4158]: I0224 05:14:23.361430 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:23.361493 master-0 kubenswrapper[4158]: I0224 05:14:23.361475 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:23.361576 master-0 kubenswrapper[4158]: I0224 05:14:23.361529 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:23.361576 master-0 kubenswrapper[4158]: I0224 05:14:23.361564 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9kf2\" (UniqueName: \"kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:14:23.361704 master-0 kubenswrapper[4158]: I0224 05:14:23.361592 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:23.361963 master-0 kubenswrapper[4158]: I0224 05:14:23.361924 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj2tz\" (UniqueName: 
\"kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"
Feb 24 05:14:23.363024 master-0 kubenswrapper[4158]: I0224 05:14:23.362912 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:14:23.363542 master-0 kubenswrapper[4158]: I0224 05:14:23.363506 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:23.363610 master-0 kubenswrapper[4158]: I0224 05:14:23.363548 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.363610 master-0 kubenswrapper[4158]: I0224 05:14:23.363573 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:23.363707 master-0 kubenswrapper[4158]: I0224 05:14:23.363524 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"
Feb 24 05:14:23.364942 master-0 kubenswrapper[4158]: I0224 05:14:23.364899 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:23.365008 master-0 kubenswrapper[4158]: I0224 05:14:23.364962 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:23.365366 master-0 kubenswrapper[4158]: I0224 05:14:23.365329 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:14:23.365936 master-0 kubenswrapper[4158]: I0224 05:14:23.365875 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:23.368209 master-0 kubenswrapper[4158]: I0224 05:14:23.368148 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:23.368294 master-0 kubenswrapper[4158]: I0224 05:14:23.368230 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:14:23.368563 master-0 kubenswrapper[4158]: I0224 05:14:23.368509 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:14:23.368748 master-0 kubenswrapper[4158]: I0224 05:14:23.368705 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:23.368860 master-0 kubenswrapper[4158]: I0224 05:14:23.368763 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"
Feb 24 05:14:23.385211 master-0 kubenswrapper[4158]: I0224 05:14:23.385156 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwzq\" (UniqueName: \"kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:14:23.385651 master-0 kubenswrapper[4158]: I0224 05:14:23.385605 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:14:23.386184 master-0 kubenswrapper[4158]: I0224 05:14:23.386153 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8cq\" (UniqueName: \"kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"
Feb 24 05:14:23.464866 master-0 kubenswrapper[4158]: I0224 05:14:23.464833 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5djr\" (UniqueName: \"kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr\") pod \"csi-snapshot-controller-operator-6fb4df594f-8tv99\" (UID: \"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99"
Feb 24 05:14:23.465002 master-0 kubenswrapper[4158]: I0224 05:14:23.464988 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgf94\" (UniqueName: \"kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:23.465070 master-0 kubenswrapper[4158]: I0224 05:14:23.465059 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdpfz\" (UniqueName: \"kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:23.465152 master-0 kubenswrapper[4158]: I0224 05:14:23.465140 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xzk\" (UniqueName: \"kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:23.465400 master-0 kubenswrapper[4158]: I0224 05:14:23.465359 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:23.465444 master-0 kubenswrapper[4158]: I0224 05:14:23.465434 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4d5x\" (UniqueName: \"kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:23.465474 master-0 kubenswrapper[4158]: I0224 05:14:23.465462 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"
Feb 24 05:14:23.465510 master-0 kubenswrapper[4158]: I0224 05:14:23.465491 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcb72\" (UniqueName: \"kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:23.465673 master-0 kubenswrapper[4158]: I0224 05:14:23.465656 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.465778 master-0 kubenswrapper[4158]: I0224 05:14:23.465763 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.465886 master-0 kubenswrapper[4158]: E0224 05:14:23.465813 4158 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 24 05:14:23.465978 master-0 kubenswrapper[4158]: I0224 05:14:23.465835 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:23.466014 master-0 kubenswrapper[4158]: E0224 05:14:23.465959 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:23.965922347 +0000 UTC m=+110.629919080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found
Feb 24 05:14:23.466056 master-0 kubenswrapper[4158]: I0224 05:14:23.466033 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:23.466102 master-0 kubenswrapper[4158]: I0224 05:14:23.466079 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:14:23.466183 master-0 kubenswrapper[4158]: E0224 05:14:23.466166 4158 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 24 05:14:23.466281 master-0 kubenswrapper[4158]: E0224 05:14:23.466270 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:23.966249435 +0000 UTC m=+110.630246128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found
Feb 24 05:14:23.466363 master-0 kubenswrapper[4158]: I0224 05:14:23.466188 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:23.466446 master-0 kubenswrapper[4158]: I0224 05:14:23.466433 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:23.466526 master-0 kubenswrapper[4158]: I0224 05:14:23.466512 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmf87\" (UniqueName: \"kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:23.466603 master-0 kubenswrapper[4158]: I0224 05:14:23.466590 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:23.466670 master-0 kubenswrapper[4158]: I0224 05:14:23.466658 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:23.466745 master-0 kubenswrapper[4158]: I0224 05:14:23.466729 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:23.466821 master-0 kubenswrapper[4158]: I0224 05:14:23.466808 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2r9\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:23.469495 master-0 kubenswrapper[4158]: I0224 05:14:23.469390 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:23.469889 master-0 kubenswrapper[4158]: I0224 05:14:23.469858 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.470170 master-0 kubenswrapper[4158]: I0224 05:14:23.470131 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.471076 master-0 kubenswrapper[4158]: I0224 05:14:23.470559 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:14:23.471439 master-0 kubenswrapper[4158]: I0224 05:14:23.471402 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:23.471498 master-0 kubenswrapper[4158]: I0224 05:14:23.471467 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:23.471539 master-0 kubenswrapper[4158]: I0224 05:14:23.471522 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:14:23.471608 master-0 kubenswrapper[4158]: I0224 05:14:23.471574 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:23.471750 master-0 kubenswrapper[4158]: I0224 05:14:23.471718 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:14:23.471808 master-0 kubenswrapper[4158]: I0224 05:14:23.471761 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:23.471808 master-0 kubenswrapper[4158]: I0224 05:14:23.471795 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9kf2\" (UniqueName: \"kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:23.471881 master-0 kubenswrapper[4158]: I0224 05:14:23.471829 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.471881 master-0 kubenswrapper[4158]: I0224 05:14:23.471857 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:23.471881 master-0 kubenswrapper[4158]: I0224 05:14:23.471883 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj2tz\" (UniqueName: \"kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.471919 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.471964 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.471998 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472026 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472051 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwc5b\" (UniqueName: \"kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472079 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472110 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472140 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472169 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472198 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb68s\" (UniqueName: \"kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472228 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b7f4\" (UniqueName: \"kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472257 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhmp\" (UniqueName: \"kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:14:23.473089 master-0 kubenswrapper[4158]: I0224 05:14:23.472289 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.473838 master-0 kubenswrapper[4158]: I0224 05:14:23.473803 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.475704 master-0 kubenswrapper[4158]: I0224 05:14:23.475660 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:23.475831 master-0 kubenswrapper[4158]: E0224 05:14:23.475806 4158 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 24 05:14:23.475896 master-0 kubenswrapper[4158]: I0224 05:14:23.475861 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:23.475952 master-0 kubenswrapper[4158]: E0224 05:14:23.475924 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:23.975902976 +0000 UTC m=+110.639899669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found
Feb 24 05:14:23.476778 master-0 kubenswrapper[4158]: I0224 05:14:23.476683 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:23.476868 master-0 kubenswrapper[4158]: I0224 05:14:23.476834 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:23.476943 master-0 kubenswrapper[4158]: I0224 05:14:23.476918 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:23.477097 master-0 kubenswrapper[4158]: E0224 05:14:23.477074 4158 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 24 05:14:23.477598 master-0 kubenswrapper[4158]: I0224 05:14:23.477126 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:23.477598 master-0 kubenswrapper[4158]: I0224 05:14:23.477184 4158 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:14:23.477598 master-0 kubenswrapper[4158]: E0224 05:14:23.477224 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:23.977205181 +0000 UTC m=+110.641201874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found
Feb 24 05:14:23.479595 master-0 kubenswrapper[4158]: I0224 05:14:23.479566 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:23.480384 master-0 kubenswrapper[4158]: E0224 05:14:23.480352 4158 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 24 05:14:23.480535 master-0 kubenswrapper[4158]: E0224 05:14:23.480458 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:23.980419307 +0000 UTC m=+110.644416010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found
Feb 24 05:14:23.480758 master-0 kubenswrapper[4158]: I0224 05:14:23.480722 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:14:23.480758 master-0 kubenswrapper[4158]: I0224 05:14:23.480738 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:14:23.484541 master-0 kubenswrapper[4158]: I0224 05:14:23.484504 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:23.486902 master-0 kubenswrapper[4158]: I0224 05:14:23.486858 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdpfz\" (UniqueName: \"kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz\") pod
\"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:23.491049 master-0 kubenswrapper[4158]: I0224 05:14:23.491001 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgf94\" (UniqueName: \"kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.491176 master-0 kubenswrapper[4158]: I0224 05:14:23.491143 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4d5x\" (UniqueName: \"kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:23.496797 master-0 kubenswrapper[4158]: I0224 05:14:23.496756 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:23.500659 master-0 kubenswrapper[4158]: I0224 05:14:23.500539 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmf87\" (UniqueName: \"kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:14:23.503346 master-0 kubenswrapper[4158]: I0224 05:14:23.503281 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xzk\" (UniqueName: \"kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:14:23.504854 master-0 kubenswrapper[4158]: I0224 05:14:23.504813 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj2tz\" (UniqueName: \"kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:23.504854 master-0 kubenswrapper[4158]: I0224 05:14:23.504964 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb68s\" (UniqueName: \"kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:23.509560 master-0 kubenswrapper[4158]: I0224 05:14:23.509497 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwc5b\" (UniqueName: \"kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:14:23.511204 master-0 kubenswrapper[4158]: I0224 05:14:23.511166 4158 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9kf2\" (UniqueName: \"kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:14:23.557529 master-0 kubenswrapper[4158]: I0224 05:14:23.557458 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:23.571716 master-0 kubenswrapper[4158]: I0224 05:14:23.571667 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:23.577933 master-0 kubenswrapper[4158]: I0224 05:14:23.577887 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:23.577991 master-0 kubenswrapper[4158]: I0224 05:14:23.577965 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:23.578109 master-0 kubenswrapper[4158]: I0224 05:14:23.578007 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " 
pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:23.578554 master-0 kubenswrapper[4158]: I0224 05:14:23.578500 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2r9\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:23.578592 master-0 kubenswrapper[4158]: I0224 05:14:23.578571 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:23.578645 master-0 kubenswrapper[4158]: E0224 05:14:23.578578 4158 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:23.578750 master-0 kubenswrapper[4158]: E0224 05:14:23.578720 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:24.078688847 +0000 UTC m=+110.742685540 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found Feb 24 05:14:23.578848 master-0 kubenswrapper[4158]: I0224 05:14:23.578815 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:23.578921 master-0 kubenswrapper[4158]: I0224 05:14:23.578886 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:23.579115 master-0 kubenswrapper[4158]: E0224 05:14:23.579073 4158 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 05:14:23.579195 master-0 kubenswrapper[4158]: E0224 05:14:23.579170 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:24.079135919 +0000 UTC m=+110.743132642 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found Feb 24 05:14:23.579350 master-0 kubenswrapper[4158]: I0224 05:14:23.579292 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:23.579395 master-0 kubenswrapper[4158]: I0224 05:14:23.579372 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:23.579434 master-0 kubenswrapper[4158]: I0224 05:14:23.579410 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:23.579493 master-0 kubenswrapper[4158]: I0224 05:14:23.579463 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " 
pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:23.579717 master-0 kubenswrapper[4158]: I0224 05:14:23.579694 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b7f4\" (UniqueName: \"kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:23.579799 master-0 kubenswrapper[4158]: I0224 05:14:23.579731 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhmp\" (UniqueName: \"kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:23.579799 master-0 kubenswrapper[4158]: I0224 05:14:23.579774 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:23.579880 master-0 kubenswrapper[4158]: I0224 05:14:23.579813 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5djr\" (UniqueName: \"kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr\") pod \"csi-snapshot-controller-operator-6fb4df594f-8tv99\" (UID: \"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" Feb 24 05:14:23.579880 master-0 kubenswrapper[4158]: I0224 05:14:23.579858 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:23.579953 master-0 kubenswrapper[4158]: I0224 05:14:23.579913 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcb72\" (UniqueName: \"kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:23.580490 master-0 kubenswrapper[4158]: E0224 05:14:23.579861 4158 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:23.580547 master-0 kubenswrapper[4158]: I0224 05:14:23.580482 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:23.580547 master-0 kubenswrapper[4158]: E0224 05:14:23.580510 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:24.080484765 +0000 UTC m=+110.744481458 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found Feb 24 05:14:23.581325 master-0 kubenswrapper[4158]: I0224 05:14:23.581236 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:23.581438 master-0 kubenswrapper[4158]: I0224 05:14:23.581390 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:23.581724 master-0 kubenswrapper[4158]: I0224 05:14:23.581685 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:23.582103 master-0 kubenswrapper[4158]: I0224 05:14:23.582063 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:23.603797 master-0 
kubenswrapper[4158]: I0224 05:14:23.603729 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:23.614458 master-0 kubenswrapper[4158]: I0224 05:14:23.614391 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q2r9\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:23.640852 master-0 kubenswrapper[4158]: I0224 05:14:23.640775 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:23.640980 master-0 kubenswrapper[4158]: I0224 05:14:23.640860 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:14:23.641416 master-0 kubenswrapper[4158]: I0224 05:14:23.641391 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:23.647506 master-0 kubenswrapper[4158]: I0224 05:14:23.647290 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:14:23.652702 master-0 kubenswrapper[4158]: I0224 05:14:23.649415 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:23.657353 master-0 kubenswrapper[4158]: I0224 05:14:23.655728 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:14:23.665654 master-0 kubenswrapper[4158]: I0224 05:14:23.664940 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b7f4\" (UniqueName: \"kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:23.683159 master-0 kubenswrapper[4158]: I0224 05:14:23.683086 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:14:23.718611 master-0 kubenswrapper[4158]: I0224 05:14:23.718572 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcb72\" (UniqueName: \"kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:23.721426 master-0 kubenswrapper[4158]: I0224 05:14:23.721384 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhmp\" (UniqueName: \"kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:23.721689 master-0 kubenswrapper[4158]: I0224 05:14:23.721653 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:23.739129 master-0 kubenswrapper[4158]: I0224 05:14:23.739089 4158 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5djr\" (UniqueName: \"kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr\") pod \"csi-snapshot-controller-operator-6fb4df594f-8tv99\" (UID: \"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" Feb 24 05:14:23.761345 master-0 kubenswrapper[4158]: I0224 05:14:23.760598 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:23.769338 master-0 kubenswrapper[4158]: I0224 05:14:23.765843 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:23.855724 master-0 kubenswrapper[4158]: I0224 05:14:23.853279 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"] Feb 24 05:14:23.876442 master-0 kubenswrapper[4158]: I0224 05:14:23.876375 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"] Feb 24 05:14:23.887038 master-0 kubenswrapper[4158]: W0224 05:14:23.886918 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd86d5bbe_3768_4695_810b_245a56e4fd1d.slice/crio-6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e WatchSource:0}: Error finding container 6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e: Status 404 returned error can't find the container with id 6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e Feb 24 05:14:23.898231 master-0 kubenswrapper[4158]: I0224 05:14:23.898178 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"] Feb 24 05:14:23.930203 master-0 kubenswrapper[4158]: W0224 05:14:23.930131 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3fed34f_b275_42c6_af6c_8de3e6fe0f9e.slice/crio-fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea WatchSource:0}: Error finding container fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea: Status 404 returned error can't find the 
container with id fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea Feb 24 05:14:23.944636 master-0 kubenswrapper[4158]: I0224 05:14:23.942289 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"] Feb 24 05:14:23.988418 master-0 kubenswrapper[4158]: I0224 05:14:23.988259 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:23.988500 master-0 kubenswrapper[4158]: E0224 05:14:23.988461 4158 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 05:14:23.988552 master-0 kubenswrapper[4158]: E0224 05:14:23.988544 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:24.988522774 +0000 UTC m=+111.652519467 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found Feb 24 05:14:23.988591 master-0 kubenswrapper[4158]: I0224 05:14:23.988561 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:23.988725 master-0 kubenswrapper[4158]: I0224 05:14:23.988691 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:23.988767 master-0 kubenswrapper[4158]: E0224 05:14:23.988729 4158 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 05:14:23.988825 master-0 kubenswrapper[4158]: E0224 05:14:23.988802 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:24.988781131 +0000 UTC m=+111.652777994 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found Feb 24 05:14:23.988923 master-0 kubenswrapper[4158]: E0224 05:14:23.988872 4158 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 05:14:23.988964 master-0 kubenswrapper[4158]: I0224 05:14:23.988915 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:23.989002 master-0 kubenswrapper[4158]: E0224 05:14:23.988980 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:24.988959975 +0000 UTC m=+111.652956668 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found Feb 24 05:14:23.989058 master-0 kubenswrapper[4158]: I0224 05:14:23.989033 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:23.989096 master-0 kubenswrapper[4158]: E0224 05:14:23.989057 4158 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:23.989129 master-0 kubenswrapper[4158]: E0224 05:14:23.989106 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:24.989094239 +0000 UTC m=+111.653091122 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:23.989175 master-0 kubenswrapper[4158]: E0224 05:14:23.989152 4158 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:23.989215 master-0 kubenswrapper[4158]: E0224 05:14:23.989195 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:24.989187031 +0000 UTC m=+111.653183724 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:24.016697 master-0 kubenswrapper[4158]: I0224 05:14:24.016615 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" Feb 24 05:14:24.051992 master-0 kubenswrapper[4158]: I0224 05:14:24.051944 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"] Feb 24 05:14:24.053211 master-0 kubenswrapper[4158]: I0224 05:14:24.053124 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"] Feb 24 05:14:24.056495 master-0 kubenswrapper[4158]: W0224 05:14:24.056412 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17f8e10b_88dc_4158_a7c4_aaa2f5d5fb9d.slice/crio-54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d WatchSource:0}: Error finding container 54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d: Status 404 returned error can't find the container with id 54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d Feb 24 05:14:24.089564 master-0 kubenswrapper[4158]: I0224 05:14:24.089511 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:24.089667 master-0 kubenswrapper[4158]: I0224 05:14:24.089573 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 
05:14:24.089745 master-0 kubenswrapper[4158]: E0224 05:14:24.089709 4158 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:24.089823 master-0 kubenswrapper[4158]: E0224 05:14:24.089787 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:25.089765683 +0000 UTC m=+111.753762366 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found Feb 24 05:14:24.089886 master-0 kubenswrapper[4158]: I0224 05:14:24.089840 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:24.089929 master-0 kubenswrapper[4158]: E0224 05:14:24.089915 4158 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:24.089971 master-0 kubenswrapper[4158]: E0224 05:14:24.089940 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:25.089932967 +0000 UTC m=+111.753929660 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found Feb 24 05:14:24.089971 master-0 kubenswrapper[4158]: E0224 05:14:24.089943 4158 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 05:14:24.090053 master-0 kubenswrapper[4158]: E0224 05:14:24.090039 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:25.090015639 +0000 UTC m=+111.754012332 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found Feb 24 05:14:24.145637 master-0 kubenswrapper[4158]: I0224 05:14:24.145589 4158 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:14:24.149389 master-0 kubenswrapper[4158]: I0224 05:14:24.149361 4158 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:14:24.152583 master-0 kubenswrapper[4158]: I0224 05:14:24.152524 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 05:14:24.152802 master-0 kubenswrapper[4158]: I0224 05:14:24.152776 4158 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 05:14:24.155372 master-0 kubenswrapper[4158]: I0224 05:14:24.155270 4158 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 05:14:24.187963 master-0 kubenswrapper[4158]: I0224 05:14:24.187884 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99"] Feb 24 05:14:24.208180 master-0 kubenswrapper[4158]: W0224 05:14:24.200537 4158 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfeee7fe8_e805_4807_b4c0_ecc7ef0f88d9.slice/crio-0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630 WatchSource:0}: Error finding container 0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630: Status 404 returned error can't find the container with id 0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630 Feb 24 05:14:24.223189 master-0 kubenswrapper[4158]: I0224 05:14:24.223142 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"] Feb 24 05:14:24.223266 master-0 kubenswrapper[4158]: I0224 05:14:24.223202 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"] Feb 24 05:14:24.225790 master-0 kubenswrapper[4158]: I0224 05:14:24.225733 4158 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"] Feb 24 05:14:24.227099 master-0 kubenswrapper[4158]: I0224 05:14:24.226986 4158 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"] Feb 24 05:14:24.708189 master-0 kubenswrapper[4158]: I0224 05:14:24.707744 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" event={"ID":"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e","Type":"ContainerStarted","Data":"fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea"} Feb 24 05:14:24.709941 master-0 kubenswrapper[4158]: I0224 05:14:24.709857 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" event={"ID":"d86d5bbe-3768-4695-810b-245a56e4fd1d","Type":"ContainerStarted","Data":"6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e"} Feb 24 05:14:24.711835 master-0 kubenswrapper[4158]: I0224 05:14:24.711400 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" event={"ID":"e6f05507-d5c1-4102-a220-1db715a496e3","Type":"ContainerStarted","Data":"a1b7fe82470a07c52d024e13d01069cc6897029891ba56a4cf999816f805e9a7"} Feb 24 05:14:24.713008 master-0 kubenswrapper[4158]: I0224 05:14:24.712860 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" event={"ID":"7a2c651d-ea1a-41f2-9745-04adc8d88904","Type":"ContainerStarted","Data":"081425b6bb126676c8a3b61b952db3a17ca28803f3b46af593db55de6dd0db70"} Feb 24 05:14:24.714374 master-0 kubenswrapper[4158]: I0224 05:14:24.714348 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" 
event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerStarted","Data":"0fcfa31d947740e8b2c9697ed507eb02078278c10de3439215a818d10753dde6"} Feb 24 05:14:24.716118 master-0 kubenswrapper[4158]: I0224 05:14:24.716069 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" event={"ID":"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d","Type":"ContainerStarted","Data":"f93fdb0961b7ab6c511e8eb1cee936b815e97917116f05d83d27c325437b676d"} Feb 24 05:14:24.716179 master-0 kubenswrapper[4158]: I0224 05:14:24.716132 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" event={"ID":"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d","Type":"ContainerStarted","Data":"54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d"} Feb 24 05:14:24.718338 master-0 kubenswrapper[4158]: I0224 05:14:24.718290 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" event={"ID":"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9","Type":"ContainerStarted","Data":"0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630"} Feb 24 05:14:24.720243 master-0 kubenswrapper[4158]: I0224 05:14:24.720211 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-r2vvc" event={"ID":"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b","Type":"ContainerStarted","Data":"64d82ee2903a4034f2cd6f4a7fd22197c2cda9f27e9a4810423ee5ca5bc5cc6d"} Feb 24 05:14:24.725363 master-0 kubenswrapper[4158]: I0224 05:14:24.721757 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" event={"ID":"633d33a1-e1b1-40b0-b56a-afb0c1085d97","Type":"ContainerStarted","Data":"31db0370c08dc41ae971998fe86ac9cb0b2bcc6c08ec28eb749ac1396b3c2667"} Feb 24 05:14:24.725363 master-0 kubenswrapper[4158]: 
I0224 05:14:24.722967 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" event={"ID":"58ecd829-4749-4c8a-933b-16b4acccac90","Type":"ContainerStarted","Data":"2e08dd98145938b80638e25896f965db6111532d375ded80b0d82dda78b2522d"} Feb 24 05:14:24.725363 master-0 kubenswrapper[4158]: I0224 05:14:24.724501 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" event={"ID":"22813c83-2f60-44ad-9624-ad367cec08f7","Type":"ContainerStarted","Data":"8a8cf406c663f290d9d876c25d67c60eea733c614a8da4d512ef2ea405de9382"} Feb 24 05:14:24.726200 master-0 kubenswrapper[4158]: I0224 05:14:24.726164 4158 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" event={"ID":"933beda1-c930-4831-a886-3cc6b7a992ad","Type":"ContainerStarted","Data":"714673c16fe0665ef1b16d03b2319efbfe055f0459ee84843763239d325f2af4"} Feb 24 05:14:24.731861 master-0 kubenswrapper[4158]: I0224 05:14:24.731812 4158 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" podStartSLOduration=75.731798398 podStartE2EDuration="1m15.731798398s" podCreationTimestamp="2026-02-24 05:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:24.73036942 +0000 UTC m=+111.394366113" watchObservedRunningTime="2026-02-24 05:14:24.731798398 +0000 UTC m=+111.395795081" Feb 24 05:14:25.003537 master-0 kubenswrapper[4158]: I0224 05:14:25.003466 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod 
\"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:25.003722 master-0 kubenswrapper[4158]: I0224 05:14:25.003563 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:25.003722 master-0 kubenswrapper[4158]: I0224 05:14:25.003602 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:25.003722 master-0 kubenswrapper[4158]: I0224 05:14:25.003638 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:25.003722 master-0 kubenswrapper[4158]: I0224 05:14:25.003667 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:25.003858 master-0 kubenswrapper[4158]: E0224 05:14:25.003805 4158 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:25.003890 master-0 kubenswrapper[4158]: E0224 05:14:25.003861 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:27.003844781 +0000 UTC m=+113.667841474 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:25.004366 master-0 kubenswrapper[4158]: E0224 05:14:25.004169 4158 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 05:14:25.004366 master-0 kubenswrapper[4158]: E0224 05:14:25.004231 4158 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 05:14:25.004366 master-0 kubenswrapper[4158]: E0224 05:14:25.004281 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:27.004258892 +0000 UTC m=+113.668255785 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found Feb 24 05:14:25.004366 master-0 kubenswrapper[4158]: E0224 05:14:25.004346 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:27.004303423 +0000 UTC m=+113.668321307 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found Feb 24 05:14:25.050047 master-0 kubenswrapper[4158]: E0224 05:14:25.004366 4158 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:25.050047 master-0 kubenswrapper[4158]: E0224 05:14:25.004401 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:27.004392266 +0000 UTC m=+113.668389189 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:25.050047 master-0 kubenswrapper[4158]: E0224 05:14:25.004449 4158 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 05:14:25.050047 master-0 kubenswrapper[4158]: E0224 05:14:25.004473 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:27.004467038 +0000 UTC m=+113.668463961 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found Feb 24 05:14:25.104206 master-0 kubenswrapper[4158]: I0224 05:14:25.104139 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:25.104455 master-0 kubenswrapper[4158]: I0224 05:14:25.104227 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:25.104455 master-0 kubenswrapper[4158]: I0224 05:14:25.104437 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:25.104522 master-0 kubenswrapper[4158]: E0224 05:14:25.104451 4158 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:25.104588 master-0 kubenswrapper[4158]: E0224 05:14:25.104554 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:27.104531395 +0000 UTC m=+113.768528088 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found Feb 24 05:14:25.104661 master-0 kubenswrapper[4158]: E0224 05:14:25.104622 4158 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 05:14:25.104744 master-0 kubenswrapper[4158]: E0224 05:14:25.104714 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. 
No retries permitted until 2026-02-24 05:14:27.104692989 +0000 UTC m=+113.768689682 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found Feb 24 05:14:25.104878 master-0 kubenswrapper[4158]: E0224 05:14:25.104782 4158 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:25.104878 master-0 kubenswrapper[4158]: E0224 05:14:25.104809 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:27.104802452 +0000 UTC m=+113.768799145 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found Feb 24 05:14:27.042679 master-0 kubenswrapper[4158]: I0224 05:14:27.042589 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:27.042679 master-0 kubenswrapper[4158]: I0224 05:14:27.042674 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod 
\"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.042862 4158 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: I0224 05:14:27.042874 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.042977 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.042948938 +0000 UTC m=+117.706945631 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.043109 4158 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.043117 4158 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.043138 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.043131303 +0000 UTC m=+117.707127996 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: I0224 05:14:27.043287 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.043353 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.043336179 +0000 UTC m=+117.707332872 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found
Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.043382 4158 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.043426 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.043406921 +0000 UTC m=+117.707403614 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found
Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: I0224 05:14:27.043450 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:27.043800 master-0 kubenswrapper[4158]: E0224 05:14:27.043742 4158 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 24 05:14:27.044243 master-0 kubenswrapper[4158]: E0224 05:14:27.043859 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.043838323 +0000 UTC m=+117.707835016 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found
Feb 24 05:14:27.144522 master-0 kubenswrapper[4158]: I0224 05:14:27.144430 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:27.144868 master-0 kubenswrapper[4158]: E0224 05:14:27.144653 4158 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 05:14:27.144868 master-0 kubenswrapper[4158]: I0224 05:14:27.144772 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:27.144963 master-0 kubenswrapper[4158]: E0224 05:14:27.144872 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.144841134 +0000 UTC m=+117.808837827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found
Feb 24 05:14:27.144963 master-0 kubenswrapper[4158]: E0224 05:14:27.144949 4158 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 24 05:14:27.145082 master-0 kubenswrapper[4158]: E0224 05:14:27.145032 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.145014559 +0000 UTC m=+117.809011242 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found
Feb 24 05:14:27.145082 master-0 kubenswrapper[4158]: I0224 05:14:27.145075 4158 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:14:27.145244 master-0 kubenswrapper[4158]: E0224 05:14:27.145191 4158 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 05:14:27.145663 master-0 kubenswrapper[4158]: E0224 05:14:27.145232 4158 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.145223114 +0000 UTC m=+117.809219807 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found
Feb 24 05:14:28.822598 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 24 05:14:28.858533 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 24 05:14:28.858988 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 24 05:14:28.864009 master-0 systemd[1]: kubelet.service: Consumed 11.012s CPU time.
Feb 24 05:14:28.886976 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 24 05:14:29.015284 master-0 kubenswrapper[7614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 05:14:29.015284 master-0 kubenswrapper[7614]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 24 05:14:29.015284 master-0 kubenswrapper[7614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 05:14:29.015284 master-0 kubenswrapper[7614]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 05:14:29.015284 master-0 kubenswrapper[7614]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 24 05:14:29.015284 master-0 kubenswrapper[7614]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 24 05:14:29.017148 master-0 kubenswrapper[7614]: I0224 05:14:29.015363 7614 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 24 05:14:29.018135 master-0 kubenswrapper[7614]: W0224 05:14:29.018105 7614 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 05:14:29.018135 master-0 kubenswrapper[7614]: W0224 05:14:29.018123 7614 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 05:14:29.018135 master-0 kubenswrapper[7614]: W0224 05:14:29.018128 7614 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 05:14:29.018135 master-0 kubenswrapper[7614]: W0224 05:14:29.018138 7614 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018144 7614 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018149 7614 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018153 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018157 7614 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018161 7614 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018164 7614 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018168 7614 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018172 7614 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018175 7614 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018179 7614 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018183 7614 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018198 7614 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018202 7614 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018205 7614 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018208 7614 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018212 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018216 7614 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018220 7614 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018225 7614 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 05:14:29.018293 master-0 kubenswrapper[7614]: W0224 05:14:29.018228 7614 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018232 7614 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018236 7614 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018239 7614 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018243 7614 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018246 7614 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018250 7614 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018253 7614 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018257 7614 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018260 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018264 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018268 7614 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018271 7614 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018275 7614 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018280 7614 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018284 7614 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018288 7614 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018291 7614 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018295 7614 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018300 7614 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 05:14:29.018860 master-0 kubenswrapper[7614]: W0224 05:14:29.018320 7614 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018325 7614 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018329 7614 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018333 7614 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018337 7614 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018342 7614 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018346 7614 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018349 7614 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018360 7614 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018364 7614 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018367 7614 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018372 7614 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018377 7614 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018381 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018385 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018390 7614 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018394 7614 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018397 7614 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018401 7614 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 05:14:29.019513 master-0 kubenswrapper[7614]: W0224 05:14:29.018405 7614 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018410 7614 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018415 7614 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018419 7614 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018423 7614 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018427 7614 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018431 7614 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018435 7614 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018438 7614 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: W0224 05:14:29.018442 7614 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018540 7614 flags.go:64] FLAG: --address="0.0.0.0"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018549 7614 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018557 7614 flags.go:64] FLAG: --anonymous-auth="true"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018563 7614 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018568 7614 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018572 7614 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018577 7614 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018583 7614 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018587 7614 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018591 7614 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018596 7614 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 24 05:14:29.020003 master-0 kubenswrapper[7614]: I0224 05:14:29.018600 7614 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018604 7614 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018608 7614 flags.go:64] FLAG: --cgroup-root=""
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018612 7614 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018647 7614 flags.go:64] FLAG: --client-ca-file=""
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018652 7614 flags.go:64] FLAG: --cloud-config=""
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018656 7614 flags.go:64] FLAG: --cloud-provider=""
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018660 7614 flags.go:64] FLAG: --cluster-dns="[]"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018671 7614 flags.go:64] FLAG: --cluster-domain=""
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018675 7614 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018679 7614 flags.go:64] FLAG: --config-dir=""
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018683 7614 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018687 7614 flags.go:64] FLAG: --container-log-max-files="5"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018693 7614 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018698 7614 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018703 7614 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018708 7614 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018712 7614 flags.go:64] FLAG: --contention-profiling="false"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018718 7614 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018723 7614 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018727 7614 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018731 7614 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018737 7614 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018741 7614 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018745 7614 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 24 05:14:29.020519 master-0 kubenswrapper[7614]: I0224 05:14:29.018749 7614 flags.go:64] FLAG: --enable-load-reader="false"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018754 7614 flags.go:64] FLAG: --enable-server="true"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018758 7614 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018767 7614 flags.go:64] FLAG: --event-burst="100"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018771 7614 flags.go:64] FLAG: --event-qps="50"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018775 7614 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018779 7614 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018784 7614 flags.go:64] FLAG: --eviction-hard=""
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018789 7614 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018793 7614 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018797 7614 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018801 7614 flags.go:64] FLAG: --eviction-soft=""
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018806 7614 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018810 7614 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018814 7614 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018823 7614 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018827 7614 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018832 7614 flags.go:64] FLAG: --fail-swap-on="true"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018836 7614 flags.go:64] FLAG: --feature-gates=""
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018842 7614 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018846 7614 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018851 7614 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018856 7614 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018861 7614 flags.go:64] FLAG: --healthz-port="10248"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018866 7614 flags.go:64] FLAG: --help="false"
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018873 7614 flags.go:64] FLAG: --hostname-override=""
Feb 24 05:14:29.021104 master-0 kubenswrapper[7614]: I0224 05:14:29.018878 7614 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018883 7614 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018888 7614 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018892 7614 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018897 7614 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018901 7614 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018905 7614 flags.go:64] FLAG: --image-service-endpoint=""
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018909 7614 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018913 7614 flags.go:64] FLAG: --kube-api-burst="100"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018917 7614 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018922 7614 flags.go:64] FLAG: --kube-api-qps="50"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018926 7614 flags.go:64] FLAG: --kube-reserved=""
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018930 7614 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018934 7614 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018938 7614 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018942 7614 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018946 7614 flags.go:64] FLAG: --lock-file=""
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018950 7614 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018954 7614 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018959 7614 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018965 7614 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018969 7614 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018973 7614 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018978 7614 flags.go:64] FLAG: --logging-format="text"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018982 7614 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 24 05:14:29.021726 master-0 kubenswrapper[7614]: I0224 05:14:29.018992 7614 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.018996 7614 flags.go:64] FLAG: --manifest-url=""
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019000 7614 flags.go:64] FLAG: --manifest-url-header=""
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019017 7614 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019021 7614 flags.go:64] FLAG: --max-open-files="1000000"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019029 7614 flags.go:64] FLAG: --max-pods="110"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019033 7614 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019037 7614 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019041 7614 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019046 7614 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019051 7614 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019054 7614 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019058 7614 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019072 7614 flags.go:64] FLAG: --node-status-max-images="50"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019077 7614 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019081 7614 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019085 7614 flags.go:64] FLAG: --pod-cidr=""
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019089 7614 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019096 7614 flags.go:64] FLAG: --pod-manifest-path=""
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019100 7614 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019104 7614 flags.go:64] FLAG: --pods-per-core="0"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019108 7614 flags.go:64] FLAG: --port="10250"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019112 7614 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019116 7614 flags.go:64] FLAG: --provider-id=""
Feb 24 05:14:29.022340 master-0 kubenswrapper[7614]: I0224 05:14:29.019120 7614 flags.go:64] FLAG: --qos-reserved=""
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019124 7614 flags.go:64] FLAG: --read-only-port="10255"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019129 7614 flags.go:64] FLAG: --register-node="true"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019133 7614 flags.go:64] FLAG: --register-schedulable="true"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019137 7614 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019144 7614 flags.go:64] FLAG: --registry-burst="10"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019148 7614 flags.go:64] FLAG: --registry-qps="5"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019152 7614 flags.go:64] FLAG: --reserved-cpus=""
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019156 7614 flags.go:64] FLAG: --reserved-memory=""
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019161 7614 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019165 7614 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019169 7614 flags.go:64] FLAG: --rotate-certificates="false"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019183 7614 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019187 7614 flags.go:64] FLAG: --runonce="false"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019192 7614 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019197 7614 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019201 7614 flags.go:64] FLAG: --seccomp-default="false"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019205 7614 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019210 7614 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019214 7614 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019219 7614 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019223 7614 flags.go:64] FLAG: --storage-driver-password="root"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019227 7614 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019231 7614 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019235 7614 flags.go:64] FLAG: --storage-driver-user="root"
Feb 24 05:14:29.022964 master-0 kubenswrapper[7614]: I0224 05:14:29.019239 7614 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019243 7614 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019247 7614 flags.go:64] FLAG: --system-cgroups=""
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019251 7614 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019257 7614 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019261 7614 flags.go:64] FLAG: --tls-cert-file=""
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019265 7614 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019272 7614 flags.go:64] FLAG: --tls-min-version=""
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019276 7614 flags.go:64] FLAG: --tls-private-key-file=""
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019280 7614 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019284 7614 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019288 7614 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019292 7614 flags.go:64] FLAG: --v="2"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019302 7614 flags.go:64] FLAG: --version="false"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019325 7614 flags.go:64] FLAG: --vmodule=""
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019330 7614 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: I0224 05:14:29.019334 7614 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: W0224 05:14:29.019499 7614 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: W0224 05:14:29.019505 7614 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: W0224 05:14:29.019512 7614 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: W0224 05:14:29.019517 7614 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: W0224 05:14:29.019521 7614 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: W0224 05:14:29.019526 7614 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: W0224 05:14:29.019535 7614 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 05:14:29.023579 master-0 kubenswrapper[7614]: W0224 05:14:29.019539 7614 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019543 7614 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019547 7614 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019551 7614 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019554 7614 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019558 7614
feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019561 7614 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019565 7614 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019569 7614 feature_gate.go:330] unrecognized feature gate: Example Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019572 7614 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019576 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019579 7614 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019583 7614 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019587 7614 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019591 7614 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019594 7614 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019598 7614 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019601 7614 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019605 7614 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 05:14:29.024214 
master-0 kubenswrapper[7614]: W0224 05:14:29.019609 7614 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 05:14:29.024214 master-0 kubenswrapper[7614]: W0224 05:14:29.019614 7614 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019620 7614 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019624 7614 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019627 7614 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019632 7614 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019637 7614 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019642 7614 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019649 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019652 7614 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019657 7614 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019661 7614 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019664 7614 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019668 7614 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019671 7614 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019675 7614 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019683 7614 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019688 7614 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019693 7614 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019697 7614 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 05:14:29.024699 master-0 kubenswrapper[7614]: W0224 05:14:29.019701 7614 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019706 7614 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019710 7614 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019714 7614 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019718 7614 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019722 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019726 7614 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019729 7614 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019733 7614 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019737 7614 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019740 7614 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 
05:14:29.019744 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019747 7614 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019751 7614 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019757 7614 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019760 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019764 7614 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019768 7614 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019771 7614 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019775 7614 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 05:14:29.025123 master-0 kubenswrapper[7614]: W0224 05:14:29.019780 7614 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 05:14:29.025740 master-0 kubenswrapper[7614]: W0224 05:14:29.019783 7614 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 05:14:29.025740 master-0 kubenswrapper[7614]: W0224 05:14:29.019787 7614 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 05:14:29.025740 master-0 kubenswrapper[7614]: W0224 05:14:29.019790 7614 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 05:14:29.025740 master-0 kubenswrapper[7614]: 
W0224 05:14:29.019793 7614 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 05:14:29.025740 master-0 kubenswrapper[7614]: W0224 05:14:29.019797 7614 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 05:14:29.025740 master-0 kubenswrapper[7614]: I0224 05:14:29.019810 7614 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 05:14:29.033789 master-0 kubenswrapper[7614]: I0224 05:14:29.033726 7614 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 24 05:14:29.033849 master-0 kubenswrapper[7614]: I0224 05:14:29.033793 7614 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 24 05:14:29.034002 master-0 kubenswrapper[7614]: W0224 05:14:29.033978 7614 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 05:14:29.034002 master-0 kubenswrapper[7614]: W0224 05:14:29.033998 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034006 7614 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034012 7614 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034019 7614 feature_gate.go:330] unrecognized feature gate: Example Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034026 7614 
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034034 7614 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034039 7614 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034045 7614 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034053 7614 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034059 7614 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 05:14:29.034065 master-0 kubenswrapper[7614]: W0224 05:14:29.034065 7614 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034072 7614 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034078 7614 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034083 7614 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034088 7614 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034096 7614 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034107 7614 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034114 7614 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034119 7614 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034125 7614 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034131 7614 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034137 7614 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034144 7614 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034151 7614 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034157 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034164 7614 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034171 7614 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034179 7614 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034186 7614 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 05:14:29.034321 master-0 kubenswrapper[7614]: W0224 05:14:29.034192 7614 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034199 7614 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034205 7614 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034210 7614 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034216 7614 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034221 7614 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034226 7614 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034231 7614 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034236 7614 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034241 7614 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034249 7614 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034256 7614 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034264 7614 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034270 7614 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034277 7614 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034284 7614 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034290 7614 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034295 7614 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034301 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 05:14:29.034789 master-0 kubenswrapper[7614]: W0224 05:14:29.034334 7614 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034343 7614 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034349 7614 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034356 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034362 7614 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034368 7614 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034375 7614 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034380 7614 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034386 7614 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034392 7614 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034398 7614 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034403 7614 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034409 7614 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034414 7614 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034419 7614 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 
05:14:29.034424 7614 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034429 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034434 7614 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034440 7614 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 05:14:29.035221 master-0 kubenswrapper[7614]: W0224 05:14:29.034446 7614 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034452 7614 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034457 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034463 7614 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: I0224 05:14:29.034475 7614 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034791 7614 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 05:14:29.035809 master-0 
kubenswrapper[7614]: W0224 05:14:29.034801 7614 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034815 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034822 7614 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034827 7614 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034834 7614 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034840 7614 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034846 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034851 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034856 7614 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 05:14:29.035809 master-0 kubenswrapper[7614]: W0224 05:14:29.034863 7614 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034868 7614 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034874 7614 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034880 7614 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034885 7614 feature_gate.go:330] 
unrecognized feature gate: InsightsConfigAPI Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034890 7614 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034895 7614 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034901 7614 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034906 7614 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034911 7614 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034917 7614 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034922 7614 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034929 7614 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034938 7614 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034945 7614 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034951 7614 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034956 7614 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034963 7614 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034970 7614 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 05:14:29.036171 master-0 kubenswrapper[7614]: W0224 05:14:29.034977 7614 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.034982 7614 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.034988 7614 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.034993 7614 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.034999 7614 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035004 7614 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035010 7614 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035015 7614 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035021 7614 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035026 7614 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035032 7614 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035037 7614 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035144 7614 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035617 7614 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035631 7614 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035640 7614 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035648 7614 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035655 7614 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035662 7614 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035667 7614 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 05:14:29.036658 master-0 kubenswrapper[7614]: W0224 05:14:29.035676 7614 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035683 7614 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035690 7614 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035696 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035703 7614 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035709 7614 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035724 7614 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035734 7614 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035742 7614 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035749 7614 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035757 7614 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035763 7614 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035769 7614 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035775 7614 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035781 7614 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035787 7614 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035793 7614 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035799 7614 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035810 7614 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 05:14:29.037266 master-0 kubenswrapper[7614]: W0224 05:14:29.035816 7614 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 05:14:29.037698 master-0 kubenswrapper[7614]: W0224 05:14:29.035824 7614 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 05:14:29.037698 master-0 kubenswrapper[7614]: W0224 05:14:29.035830 7614 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 05:14:29.037698 master-0 kubenswrapper[7614]: W0224 05:14:29.035837 7614 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 05:14:29.037698 master-0 kubenswrapper[7614]: I0224 05:14:29.035846 7614 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 05:14:29.037698 master-0 kubenswrapper[7614]: I0224 05:14:29.036289 7614 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 24 05:14:29.043042 master-0 kubenswrapper[7614]: I0224 05:14:29.042977 7614 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 24 05:14:29.043272 master-0 kubenswrapper[7614]: I0224 05:14:29.043233 7614 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 24 05:14:29.044147 master-0 kubenswrapper[7614]: I0224 05:14:29.044111 7614 server.go:997] "Starting client certificate rotation"
Feb 24 05:14:29.044236 master-0 kubenswrapper[7614]: I0224 05:14:29.044204 7614 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 24 05:14:29.044487 master-0 kubenswrapper[7614]: I0224 05:14:29.044402 7614 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-25 05:04:32 +0000 UTC, rotation deadline is 2026-02-24 23:21:31.031025895 +0000 UTC
Feb 24 05:14:29.044525 master-0 kubenswrapper[7614]: I0224 05:14:29.044486 7614 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h7m1.986543505s for next certificate rotation
Feb 24 05:14:29.045925 master-0 kubenswrapper[7614]: I0224 05:14:29.045881 7614 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 05:14:29.050055 master-0 kubenswrapper[7614]: I0224 05:14:29.049978 7614 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 05:14:29.056776 master-0 kubenswrapper[7614]: I0224 05:14:29.056736 7614 log.go:25] "Validated CRI v1 runtime API"
Feb 24 05:14:29.059091 master-0 kubenswrapper[7614]: I0224 05:14:29.059056 7614 log.go:25] "Validated CRI v1 image API"
Feb 24 05:14:29.060891 master-0 kubenswrapper[7614]: I0224 05:14:29.060613 7614 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 24 05:14:29.067450 master-0 kubenswrapper[7614]: I0224 05:14:29.067385 7614 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 c6a7f20e-7412-4bcb-a694-c65c3535af20:/dev/vda3]
Feb 24 05:14:29.067986 master-0 kubenswrapper[7614]: I0224 05:14:29.067440 7614 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630/userdata/shm major:0 minor:301 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/081425b6bb126676c8a3b61b952db3a17ca28803f3b46af593db55de6dd0db70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/081425b6bb126676c8a3b61b952db3a17ca28803f3b46af593db55de6dd0db70/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0fcfa31d947740e8b2c9697ed507eb02078278c10de3439215a818d10753dde6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0fcfa31d947740e8b2c9697ed507eb02078278c10de3439215a818d10753dde6/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e08dd98145938b80638e25896f965db6111532d375ded80b0d82dda78b2522d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e08dd98145938b80638e25896f965db6111532d375ded80b0d82dda78b2522d/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31db0370c08dc41ae971998fe86ac9cb0b2bcc6c08ec28eb749ac1396b3c2667/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31db0370c08dc41ae971998fe86ac9cb0b2bcc6c08ec28eb749ac1396b3c2667/userdata/shm major:0 minor:282 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397/userdata/shm major:0 
minor:112 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64d82ee2903a4034f2cd6f4a7fd22197c2cda9f27e9a4810423ee5ca5bc5cc6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64d82ee2903a4034f2cd6f4a7fd22197c2cda9f27e9a4810423ee5ca5bc5cc6d/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/714673c16fe0665ef1b16d03b2319efbfe055f0459ee84843763239d325f2af4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/714673c16fe0665ef1b16d03b2319efbfe055f0459ee84843763239d325f2af4/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537/userdata/shm major:0 minor:108 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8a8cf406c663f290d9d876c25d67c60eea733c614a8da4d512ef2ea405de9382/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8a8cf406c663f290d9d876c25d67c60eea733c614a8da4d512ef2ea405de9382/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b/userdata/shm major:0 minor:131 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a1b7fe82470a07c52d024e13d01069cc6897029891ba56a4cf999816f805e9a7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a1b7fe82470a07c52d024e13d01069cc6897029891ba56a4cf999816f805e9a7/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4dd28dfe0dfb965f7a49c3ef1803925b4da66fd0d5c36e6b22e6c8bf1f041ec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4dd28dfe0dfb965f7a49c3ef1803925b4da66fd0d5c36e6b22e6c8bf1f041ec/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720/userdata/shm major:0 minor:145 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b61b99b2785eeea3d1aff791e9d12068cc8f8c45a0b7df02a029df563a9b7817/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b61b99b2785eeea3d1aff791e9d12068cc8f8c45a0b7df02a029df563a9b7817/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29/userdata/shm major:0 minor:144 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1fb923ea59745e7261babd35ceb4f756ebbc1afdb5f4b607af29ed59d22b5f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1fb923ea59745e7261babd35ceb4f756ebbc1afdb5f4b607af29ed59d22b5f8/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~projected/kube-api-access major:0 minor:269 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~secret/serving-cert major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~secret/serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/kube-api-access-5q2r9:{mountpoint:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/kube-api-access-5q2r9 major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~projected/kube-api-access-d4d5x:{mountpoint:/var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~projected/kube-api-access-d4d5x major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~projected/kube-api-access-m9kf2:{mountpoint:/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~projected/kube-api-access-m9kf2 major:0 minor:260 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~secret/serving-cert major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~projected/kube-api-access-wwc5b:{mountpoint:/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~projected/kube-api-access-wwc5b major:0 minor:259 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~projected/kube-api-access-62xzk:{mountpoint:/var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~projected/kube-api-access-62xzk major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~projected/kube-api-access-zb68s:{mountpoint:/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~projected/kube-api-access-zb68s major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~projected/kube-api-access-79h66:{mountpoint:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~projected/kube-api-access-79h66 major:0 minor:143 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/767424fb-babf-4b73-b5e2-0bee65fcf207/volumes/kubernetes.io~projected/kube-api-access-hl828:{mountpoint:/var/lib/kubelet/pods/767424fb-babf-4b73-b5e2-0bee65fcf207/volumes/kubernetes.io~projected/kube-api-access-hl828 major:0 minor:130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~projected/kube-api-access-fgf94:{mountpoint:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~projected/kube-api-access-fgf94 major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/etcd-client major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7c4b448f-670e-45a1-bdd7-c42903c682a9/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/7c4b448f-670e-45a1-bdd7-c42903c682a9/volumes/kubernetes.io~projected/kube-api-access major:0 minor:74 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~projected/kube-api-access-8ktz5:{mountpoint:/var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~projected/kube-api-access-8ktz5 major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~projected/kube-api-access-bs794:{mountpoint:/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~projected/kube-api-access-bs794 major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8be1f8db-3f0b-4d6f-be42-7564fba66820/volumes/kubernetes.io~projected/kube-api-access-xj2tz:{mountpoint:/var/lib/kubelet/pods/8be1f8db-3f0b-4d6f-be42-7564fba66820/volumes/kubernetes.io~projected/kube-api-access-xj2tz major:0 minor:258 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~projected/kube-api-access-gmf87:{mountpoint:/var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~projected/kube-api-access-gmf87 major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~projected/kube-api-access-jrhmp:{mountpoint:/var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~projected/kube-api-access-jrhmp major:0 minor:278 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c00ee01c-143b-4e44-823c-c6bfdedb8ed6/volumes/kubernetes.io~projected/kube-api-access-jx4rw:{mountpoint:/var/lib/kubelet/pods/c00ee01c-143b-4e44-823c-c6bfdedb8ed6/volumes/kubernetes.io~projected/kube-api-access-jx4rw major:0 minor:73 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~projected/kube-api-access-4p8zb:{mountpoint:/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~projected/kube-api-access-4p8zb major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~secret/webhook-cert major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~projected/kube-api-access-mdpfz:{mountpoint:/var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~projected/kube-api-access-mdpfz major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~projected/kube-api-access-tlwzq:{mountpoint:/var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~projected/kube-api-access-tlwzq major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~projected/kube-api-access-xj8cq:{mountpoint:/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~projected/kube-api-access-xj8cq major:0 minor:245 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~projected/kube-api-access-zcb72:{mountpoint:/var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~projected/kube-api-access-zcb72 major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~projected/kube-api-access major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b/volumes/kubernetes.io~projected/kube-api-access-6b7f4:{mountpoint:/var/lib/kubelet/pods/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b/volumes/kubernetes.io~projected/kube-api-access-6b7f4 major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~projected/kube-api-access-dcj62:{mountpoint:/var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~projected/kube-api-access-dcj62 major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9/volumes/kubernetes.io~projected/kube-api-access-h5djr:{mountpoint:/var/lib/kubelet/pods/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9/volumes/kubernetes.io~projected/kube-api-access-h5djr major:0 minor:286 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/44fe6c1778db0c91b1b45ea04e271340d7b72486fa632d6c44dcc083bdcbb1fc/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/81796ead5d4c2f08e3ddc9f813ddd71124a15a43d23db04c9cbf641e81a87798/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/86c64ebc49fa12a2558c14e0736340d2f710dc402f171b5bdd984d8da1c2f548/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/aa6c8c5245c37b4d24f04249c4b64d92d7333cab1aef5ef6434cc48c481826f2/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/582a6898b7fe7f85a28584d6800d32d73b8b0e2e6ef1f022270ae49ef504eb4b/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-120:{mountpoint:/var/lib/containers/storage/overlay/a1b3e7ee9d58c6d9bf74643775215d673de149733e2b58eec692c5b8c2ec77cb/merged major:0 minor:120 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/29814a2ed02eddd4a62d6e6ce12ed7d858b7084c06afdbf4b4cd03871ec1cb3d/merged major:0 minor:125 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/b1b452a7ba83fd534073e80424169fe31afe1fd76960607a59176690abcdf3e9/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/fc0b8a651db054cc08c0734ffcc2a9f0d455f59a5f82d0b6f4c4bc2ec09464bb/merged major:0 minor:136 fsType:overlay blockSize:0} 
overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/8b9123ed0c8c5e692061aa2dd116a0bb4107301711c12468dbd267e1e8370177/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/ea3937862f477b84afb4497a226964794e29146baa015e84512a581e2754eb4f/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/388693ac608b221c22e30cc02916335a296781477a587a43dc4a921e285085cb/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/4d8776f592a3da95bcb765045bf24294ca12fca96bbbc9eea3834ebfd8edf1cb/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/83615b1a79296f07ea844a91f1ec6e1f36bad5a5dd21361ccaadd0abece8f611/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/e5675b379b806105ec1bd682c049511ae49fb88c3fe8dc64871a8d02d2889eae/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/68cb588003993d95c397a477f67ec210b3ea0ae2d8fdee8968778c568bc8e343/merged major:0 minor:165 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/db63326da438136f8706338ebfbb7dc10886f1ce165e76ca27ec6590f26b6848/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/6814bad0fc4640ef79f58713a014817e8c81fed4399f08cc63392bf46c302761/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/9d69c031fbabf6c5f7bed3b3a0bbb8d20c6e4595fc1aed3f4e3d4b758baa7ab0/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-176:{mountpoint:/var/lib/containers/storage/overlay/46b3081aa8abbf274ba1e5e16f9954a43be254afc9de68c600a4ffa427b30b8c/merged major:0 minor:176 fsType:overlay blockSize:0} 
overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/86c38e88780248cbf2f43b2e05119a2db1ca8386aac391b7e3b9769cd4da498c/merged major:0 minor:177 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/778d243bb2436b0cf34464ccd846e245d236bd358dcf7d6c30447b2dae9cb4dc/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/86c4a6940606b9991c443606e3479aba93741d603f62260e4425b8acdb82a4d7/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/7e5cb4f0c88dc179fadd626d07bf6ac90cfe4c8de2819f4ef9a088534be44740/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/bbb1a8e076f18882f30eac3228b417a41071dcaf1784b7510b6010ef3e68394e/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-197:{mountpoint:/var/lib/containers/storage/overlay/edb902b63708c2524971f38a31b7d59038aae4d7fc0837f7b8390ccfae7d1666/merged major:0 minor:197 fsType:overlay blockSize:0} overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/99f218b94c5d4eeb54bc364000b17aec0cf57543be54c3dd9f0339952bb54e0c/merged major:0 minor:205 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/182dbb475f39f3238003607f30407f85e0cf67b81f873b627e644acea7e8dc51/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/bc858f41ec9d813d80b1b853bdebbb9e7c35c84758c25eb9e3dfefbdd14b8b85/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/7d37bda42d2075af4abf39a9abab42884c491e4d0c80048866a147265f324383/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/b10c9b87ac48b114bf1d7c8b6f6ebb9861c8162b8482e45ab559bf2708c84642/merged major:0 minor:221 fsType:overlay blockSize:0} 
overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/00605ea8170362cf6b21ae8c78d579d85217c64b531cf54aceb1fdc00a6f221e/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/136e726352ea6729573ed3b48631ba334e76c1fedfe103f06d67defb1caf2dd3/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/fcd3b8d4e8e55f8d657eebb5ea4e5533e4cff6840b0a26637dcf5def2242e73f/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/c11c3cefc93140c26cc04a46e0e9eacfa3670c3109e9577843ff872b77098701/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/03b8abd6f0c85d872372a92a78a046169fc4d97fa83a6d7076d106092278b0cd/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/506eaf5639e4e8343277178cd973f190e29ae0b9fd1a9f786735dd14f892fa42/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/66618ee9e9a82f6195f5a73b73c64ca596d0f30b46ef21156a622393beedc4ab/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/e74d84294102aab640a5eb2058d95be2024a38ade76430fb42ba1fb66b16ae5e/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/0c1f48f50d9575063ba5516ba6a3a0a0eb08858f7818e5dd430a6799f4bcc8c3/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/4d780b3ecac2672da8979576516038c89cf07a2f11c43bfd2dafa351c4d6b64a/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/c5d92dca52645885d67f69993695180d6cee3ccb13816ddfcad508f4200bd52a/merged major:0 minor:311 fsType:overlay blockSize:0} 
overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/0616c13dd7fa330b231d55f748821c9b8082756bb1e35cd1dfa1beba0f940c2a/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/b3edf2f0f7444ec792cc5e00172501a59d9d0437167275f2b52b85a88e8a2e01/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/9d56a59c966efa2f16ea35c40d25bc74ecc1e8029d724a43f1e36f7e9f3211de/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/375fd63d2be58f996f030856ee3db03992c4689e96e9017b589482c343bf5d8e/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/867e4d2808414bf68508c41a1634c2c06269aa014991c99b29f434164c5ca1d9/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/bc16d57a3358e4cf195bdc7d98f5efbf42b25fcecb89041131c54a0e8deea85f/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/2d129dc00bdf4e2b5bf68dfaaafc08d56a72a413b1bc95f20e84924367c6f199/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/c43b6cf4dd04809aca132575e9050ed539ceba5deb4bf9abbf8807cfeea9baa6/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/4b05a0568fd434eb80224ed6ba0a3077df60ed9e605c9475bb5b06b2e5b999b1/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/fd93962bf3e23ad6857a2d61e1b62d5bac49a312f5f2bf5730351c001b0389f6/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/fa1ae8b9522ade7e081791bec78943cdcd8ca4d45051d7030482e340b67d7fbb/merged major:0 minor:66 fsType:overlay blockSize:0} 
overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/3f42d42b741d220f6135830da7c7264e890e6d29f639e4a51712d9208e3867d1/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/ea6d706a99b676ddad0ed9bf7979f95c2d3132b914065e459ba5139069c9d882/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/2f656dec4b6e0d3cdd448534b0f0b53590b39ffa0aac4cda933bc4e1e7bec457/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/a83cd67f751334a43a5af255b0409b0ae4ea7d6c90d8372badfddbbc598a7908/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/a553eb0eb3553770f96361e326487dbc6b0a36260fe1f17bc46b2db686ed0f24/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/d06dca7fbd3d23c4631c6d9574813b383db54fccd01168748395388971e8379d/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/6fa322c1d2c61ba134bf6ec1a14c89ae8d42b0b9fd884384d7f40c0e85b8117c/merged major:0 minor:97 fsType:overlay blockSize:0}] Feb 24 05:14:29.107896 master-0 kubenswrapper[7614]: I0224 05:14:29.106102 7614 manager.go:217] Machine: {Timestamp:2026-02-24 05:14:29.103736466 +0000 UTC m=+0.138479692 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514145280 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:8094cc4b75b94a6193669cda4f2ebd55 SystemUUID:8094cc4b-75b9-4a61-9366-9cda4f2ebd55 BootID:a3e360dd-b72b-40f0-a056-0eff64b26b55 Filesystems:[{Device:overlay_0-120 DeviceMajor:0 DeviceMinor:120 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 
DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b/userdata/shm DeviceMajor:0 DeviceMinor:131 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~projected/kube-api-access-tlwzq DeviceMajor:0 DeviceMinor:243 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b/volumes/kubernetes.io~projected/kube-api-access-6b7f4 DeviceMajor:0 DeviceMinor:272 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~projected/kube-api-access-jrhmp DeviceMajor:0 DeviceMinor:278 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b61b99b2785eeea3d1aff791e9d12068cc8f8c45a0b7df02a029df563a9b7817/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:254 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~projected/kube-api-access-8ktz5 DeviceMajor:0 DeviceMinor:135 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~projected/kube-api-access-mdpfz DeviceMajor:0 DeviceMinor:251 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~projected/kube-api-access-zb68s DeviceMajor:0 DeviceMinor:257 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~projected/kube-api-access-4p8zb DeviceMajor:0 DeviceMinor:164 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:142 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-197 
DeviceMajor:0 DeviceMinor:197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64d82ee2903a4034f2cd6f4a7fd22197c2cda9f27e9a4810423ee5ca5bc5cc6d/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630/userdata/shm DeviceMajor:0 DeviceMinor:301 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29/userdata/shm DeviceMajor:0 DeviceMinor:144 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:249 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/767424fb-babf-4b73-b5e2-0bee65fcf207/volumes/kubernetes.io~projected/kube-api-access-hl828 DeviceMajor:0 DeviceMinor:130 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:264 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/8be1f8db-3f0b-4d6f-be42-7564fba66820/volumes/kubernetes.io~projected/kube-api-access-xj2tz DeviceMajor:0 DeviceMinor:258 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a1b7fe82470a07c52d024e13d01069cc6897029891ba56a4cf999816f805e9a7/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:167 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~projected/kube-api-access-79h66 DeviceMajor:0 DeviceMinor:143 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e08dd98145938b80638e25896f965db6111532d375ded80b0d82dda78b2522d/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~projected/kube-api-access-fgf94 DeviceMajor:0 DeviceMinor:252 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~projected/kube-api-access-wwc5b DeviceMajor:0 DeviceMinor:259 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/kube-api-access-5q2r9 DeviceMajor:0 DeviceMinor:267 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4dd28dfe0dfb965f7a49c3ef1803925b4da66fd0d5c36e6b22e6c8bf1f041ec/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-176 DeviceMajor:0 DeviceMinor:176 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 
DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~projected/kube-api-access-m9kf2 DeviceMajor:0 DeviceMinor:260 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537/userdata/shm DeviceMajor:0 DeviceMinor:108 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31db0370c08dc41ae971998fe86ac9cb0b2bcc6c08ec28eb749ac1396b3c2667/userdata/shm DeviceMajor:0 DeviceMinor:282 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622/userdata/shm DeviceMajor:0 DeviceMinor:50 
Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7c4b448f-670e-45a1-bdd7-c42903c682a9/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:74 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c00ee01c-143b-4e44-823c-c6bfdedb8ed6/volumes/kubernetes.io~projected/kube-api-access-jx4rw DeviceMajor:0 DeviceMinor:73 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~projected/kube-api-access-bs794 DeviceMajor:0 DeviceMinor:141 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~projected/kube-api-access-d4d5x DeviceMajor:0 DeviceMinor:253 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~projected/kube-api-access-zcb72 DeviceMajor:0 DeviceMinor:277 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:269 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0fcfa31d947740e8b2c9697ed507eb02078278c10de3439215a818d10753dde6/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~projected/kube-api-access-dcj62 DeviceMajor:0 DeviceMinor:107 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720/userdata/shm DeviceMajor:0 DeviceMinor:145 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:279 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1fb923ea59745e7261babd35ceb4f756ebbc1afdb5f4b607af29ed59d22b5f8/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-221 DeviceMajor:0 
DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:241 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8a8cf406c663f290d9d876c25d67c60eea733c614a8da4d512ef2ea405de9382/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9/volumes/kubernetes.io~projected/kube-api-access-h5djr DeviceMajor:0 DeviceMinor:286 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397/userdata/shm DeviceMajor:0 DeviceMinor:112 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:6166277 
HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:244 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~projected/kube-api-access-xj8cq DeviceMajor:0 DeviceMinor:245 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~projected/kube-api-access-gmf87 DeviceMajor:0 DeviceMinor:256 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~projected/kube-api-access-62xzk DeviceMajor:0 DeviceMinor:255 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/714673c16fe0665ef1b16d03b2319efbfe055f0459ee84843763239d325f2af4/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/081425b6bb126676c8a3b61b952db3a17ca28803f3b46af593db55de6dd0db70/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 
252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0655b027cab3684 MacAddress:b2:45:fc:0e:5c:5d Speed:10000 Mtu:8900} {Name:081425b6bb12667 MacAddress:82:44:29:3c:d9:a1 Speed:10000 Mtu:8900} {Name:0fcfa31d947740e MacAddress:42:d7:dc:9c:8f:89 Speed:10000 Mtu:8900} {Name:2e08dd98145938b MacAddress:d6:6a:a9:ce:b8:20 Speed:10000 Mtu:8900} {Name:31db0370c08dc41 MacAddress:be:3e:a5:2a:ce:91 Speed:10000 Mtu:8900} {Name:54e1df610bab1f2 MacAddress:36:73:3a:72:f0:cb Speed:10000 Mtu:8900} {Name:6bea8d6f03626b0 MacAddress:fe:14:ed:31:3b:e7 Speed:10000 Mtu:8900} {Name:714673c16fe0665 MacAddress:22:24:59:d3:7a:7e Speed:10000 Mtu:8900} {Name:8a8cf406c663f29 MacAddress:0e:7e:4c:d9:1a:36 Speed:10000 Mtu:8900} {Name:a1b7fe82470a07c MacAddress:ae:53:62:0b:d4:84 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:aa:2b:07:18:10:d7 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:63:ba:dc Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:5d:e3:99 Speed:-1 Mtu:9000} {Name:fd87d63ea110a27 MacAddress:ea:68:92:84:e6:93 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:2e:18:f0:62:3d:21 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514145280 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} 
{Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] 
SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 24 05:14:29.107896 master-0 kubenswrapper[7614]: I0224 05:14:29.107816 7614 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 24 05:14:29.108206 master-0 kubenswrapper[7614]: I0224 05:14:29.108035 7614 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 24 05:14:29.108929 master-0 kubenswrapper[7614]: I0224 05:14:29.108866 7614 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 24 05:14:29.109330 master-0 kubenswrapper[7614]: I0224 05:14:29.109234 7614 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 24 05:14:29.109727 master-0 kubenswrapper[7614]: I0224 05:14:29.109349 7614 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 24 05:14:29.109798 master-0 kubenswrapper[7614]: I0224 05:14:29.109769 7614 topology_manager.go:138] "Creating topology manager with none policy"
Feb 24 05:14:29.109831 master-0 kubenswrapper[7614]: I0224 05:14:29.109799 7614 container_manager_linux.go:303] "Creating device plugin manager"
Feb 24 05:14:29.109831 master-0 kubenswrapper[7614]: I0224 05:14:29.109818 7614 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 05:14:29.109895 master-0 kubenswrapper[7614]: I0224 05:14:29.109862 7614 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 24 05:14:29.110192 master-0 kubenswrapper[7614]: I0224 05:14:29.110163 7614 state_mem.go:36] "Initialized new in-memory state store"
Feb 24 05:14:29.110528 master-0 kubenswrapper[7614]: I0224 05:14:29.110496 7614 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 24 05:14:29.110664 master-0 kubenswrapper[7614]: I0224 05:14:29.110632 7614 kubelet.go:418] "Attempting to sync node with API server"
Feb 24 05:14:29.110715 master-0 kubenswrapper[7614]: I0224 05:14:29.110667 7614 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 24 05:14:29.110748 master-0 kubenswrapper[7614]: I0224 05:14:29.110718 7614 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 24 05:14:29.110748 master-0 kubenswrapper[7614]: I0224 05:14:29.110744 7614 kubelet.go:324] "Adding apiserver pod source"
Feb 24 05:14:29.110803 master-0 kubenswrapper[7614]: I0224 05:14:29.110781 7614 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 24 05:14:29.112610 master-0 kubenswrapper[7614]: I0224 05:14:29.112569 7614 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1"
Feb 24 05:14:29.112854 master-0 kubenswrapper[7614]: I0224 05:14:29.112819 7614 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 24 05:14:29.113766 master-0 kubenswrapper[7614]: I0224 05:14:29.113727 7614 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 24 05:14:29.114239 master-0 kubenswrapper[7614]: I0224 05:14:29.114202 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 24 05:14:29.114328 master-0 kubenswrapper[7614]: I0224 05:14:29.114296 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 24 05:14:29.114398 master-0 kubenswrapper[7614]: I0224 05:14:29.114368 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 24 05:14:29.114398 master-0 kubenswrapper[7614]: I0224 05:14:29.114395 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 24 05:14:29.114470 master-0 kubenswrapper[7614]: I0224 05:14:29.114415 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 24 05:14:29.114503 master-0 kubenswrapper[7614]: I0224 05:14:29.114470 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 24 05:14:29.114503 master-0 kubenswrapper[7614]: I0224 05:14:29.114485 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 24 05:14:29.114503 master-0 kubenswrapper[7614]: I0224 05:14:29.114499 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 24 05:14:29.114576 master-0 kubenswrapper[7614]: I0224 05:14:29.114553 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 24 05:14:29.114576 master-0 kubenswrapper[7614]: I0224 05:14:29.114570 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 24 05:14:29.114731 master-0 kubenswrapper[7614]: I0224 05:14:29.114691 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 24 05:14:29.114819 master-0 kubenswrapper[7614]: I0224 05:14:29.114795 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 24 05:14:29.114930 master-0 kubenswrapper[7614]: I0224 05:14:29.114901 7614 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 24 05:14:29.117631 master-0 kubenswrapper[7614]: I0224 05:14:29.116910 7614 server.go:1280] "Started kubelet"
Feb 24 05:14:29.117905 master-0 kubenswrapper[7614]: I0224 05:14:29.117067 7614 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 24 05:14:29.118285 master-0 kubenswrapper[7614]: I0224 05:14:29.117095 7614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 24 05:14:29.118394 master-0 kubenswrapper[7614]: I0224 05:14:29.118303 7614 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 24 05:14:29.119016 master-0 kubenswrapper[7614]: I0224 05:14:29.118912 7614 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 24 05:14:29.125160 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 24 05:14:29.130301 master-0 kubenswrapper[7614]: I0224 05:14:29.124627 7614 server.go:449] "Adding debug handlers to kubelet server"
Feb 24 05:14:29.130428 master-0 kubenswrapper[7614]: I0224 05:14:29.130174 7614 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 24 05:14:29.130428 master-0 kubenswrapper[7614]: I0224 05:14:29.130403 7614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 24 05:14:29.130928 master-0 kubenswrapper[7614]: I0224 05:14:29.130804 7614 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 24 05:14:29.130928 master-0 kubenswrapper[7614]: I0224 05:14:29.130848 7614 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 24 05:14:29.131235 master-0 kubenswrapper[7614]: I0224 05:14:29.130849 7614 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-25 05:04:32 +0000 UTC, rotation deadline is 2026-02-24 23:18:48.711041486 +0000 UTC
Feb 24 05:14:29.131235 master-0 kubenswrapper[7614]: I0224 05:14:29.131084 7614 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 24 05:14:29.131235 master-0 kubenswrapper[7614]: I0224 05:14:29.131101 7614 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h4m19.579948832s for next certificate rotation
Feb 24 05:14:29.132015 master-0 kubenswrapper[7614]: E0224 05:14:29.131214 7614 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 24 05:14:29.134659 master-0 kubenswrapper[7614]: I0224 05:14:29.133831 7614 factory.go:55] Registering systemd factory
Feb 24 05:14:29.134659 master-0 kubenswrapper[7614]: I0224 05:14:29.133874 7614 factory.go:221] Registration of the systemd container factory successfully
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.136680 7614 factory.go:153] Registering CRI-O factory
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.136717 7614 factory.go:221] Registration of the crio container factory successfully
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.136827 7614 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.136867 7614 factory.go:103] Registering Raw factory
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.136900 7614 manager.go:1196] Started watching for new ooms in manager
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.137362 7614 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.137860 7614 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.138069 7614 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 05:14:29.138503 master-0 kubenswrapper[7614]: I0224 05:14:29.138331 7614 manager.go:319] Starting recovery of all containers
Feb 24 05:14:29.141014 master-0 kubenswrapper[7614]: I0224 05:14:29.140347 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca" seLinuxMountContext=""
Feb 24 05:14:29.141014 master-0 kubenswrapper[7614]: I0224 05:14:29.140605 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141271 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c4b448f-670e-45a1-bdd7-c42903c682a9" volumeName="kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141319 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="996ae0be-d36c-47f4-98b2-1c89591f9506" volumeName="kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141338 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" volumeName="kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141354 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f05507-d5c1-4102-a220-1db715a496e3" volumeName="kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141373 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6690909-3a87-4bdc-b0ec-1cdd4df32e4b" volumeName="kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141389 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" volumeName="kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141408 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c106275b-72b6-4877-95c3-830f93e35375" volumeName="kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141426 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c106275b-72b6-4877-95c3-830f93e35375" volumeName="kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141443 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="633d33a1-e1b1-40b0-b56a-afb0c1085d97" volumeName="kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141460 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee01c-143b-4e44-823c-c6bfdedb8ed6" volumeName="kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141475 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c177f8fe-8145-4557-ae78-af121efe001c" volumeName="kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141491 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d86d5bbe-3768-4695-810b-245a56e4fd1d" volumeName="kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141505 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767424fb-babf-4b73-b5e2-0bee65fcf207" volumeName="kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141522 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141536 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88b915ff-fd94-4998-aa09-70f95c0f1b8a" volumeName="kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141550 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58ecd829-4749-4c8a-933b-16b4acccac90" volumeName="kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141603 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="933beda1-c930-4831-a886-3cc6b7a992ad" volumeName="kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141620 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" volumeName="kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141635 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5ede6a-9d4b-47a2-b4ba-e6018910d05a" volumeName="kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141652 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141668 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767424fb-babf-4b73-b5e2-0bee65fcf207" volumeName="kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap" seLinuxMountContext=""
Feb 24 05:14:29.141637 master-0 kubenswrapper[7614]: I0224 05:14:29.141684 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88b915ff-fd94-4998-aa09-70f95c0f1b8a" volumeName="kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141702 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c106275b-72b6-4877-95c3-830f93e35375" volumeName="kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141750 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d86d5bbe-3768-4695-810b-245a56e4fd1d" volumeName="kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141769 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58ecd829-4749-4c8a-933b-16b4acccac90" volumeName="kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141787 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141802 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141817 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767424fb-babf-4b73-b5e2-0bee65fcf207" volumeName="kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141832 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141849 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141868 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="933beda1-c930-4831-a886-3cc6b7a992ad" volumeName="kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141881 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f05507-d5c1-4102-a220-1db715a496e3" volumeName="kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141895 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" volumeName="kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141909 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f77227c8-c52d-4a71-ae1b-792055f6f23d" volumeName="kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141924 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f77227c8-c52d-4a71-ae1b-792055f6f23d" volumeName="kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141939 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6690909-3a87-4bdc-b0ec-1cdd4df32e4b" volumeName="kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141957 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141973 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.141987 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" volumeName="kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142000 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="feee7fe8-e805-4807-b4c0-ecc7ef0f88d9" volumeName="kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142013 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" volumeName="kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142026 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d6b1ce7-1213-494c-829d-186d39eac7eb" volumeName="kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142039 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d6b1ce7-1213-494c-829d-186d39eac7eb" volumeName="kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142057 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49bfccec-61ec-4bef-a561-9f6e6f906215" volumeName="kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142072 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="633d33a1-e1b1-40b0-b56a-afb0c1085d97" volumeName="kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142086 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142100 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" volumeName="kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142113 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88b915ff-fd94-4998-aa09-70f95c0f1b8a" volumeName="kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142127 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22813c83-2f60-44ad-9624-ad367cec08f7" volumeName="kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142143 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d86d5bbe-3768-4695-810b-245a56e4fd1d" volumeName="kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142166 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c106275b-72b6-4877-95c3-830f93e35375" volumeName="kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142182 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142197 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5ede6a-9d4b-47a2-b4ba-e6018910d05a" volumeName="kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142214 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142227 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8be1f8db-3f0b-4d6f-be42-7564fba66820" volumeName="kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142245 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58ecd829-4749-4c8a-933b-16b4acccac90" volumeName="kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142261 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142277 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142290 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767424fb-babf-4b73-b5e2-0bee65fcf207" volumeName="kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142303 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88b915ff-fd94-4998-aa09-70f95c0f1b8a" volumeName="kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142334 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="933beda1-c930-4831-a886-3cc6b7a992ad" volumeName="kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142349 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c177f8fe-8145-4557-ae78-af121efe001c" volumeName="kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142363 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d6b1ce7-1213-494c-829d-186d39eac7eb" volumeName="kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142376 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" volumeName="kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142392 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22813c83-2f60-44ad-9624-ad367cec08f7" volumeName="kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142408 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="633d33a1-e1b1-40b0-b56a-afb0c1085d97" volumeName="kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142421 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f05507-d5c1-4102-a220-1db715a496e3" volumeName="kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142434 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22813c83-2f60-44ad-9624-ad367cec08f7" volumeName="kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142446 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7c4b448f-670e-45a1-bdd7-c42903c682a9" volumeName="kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142460 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee01c-143b-4e44-823c-c6bfdedb8ed6" volumeName="kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142473 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee01c-143b-4e44-823c-c6bfdedb8ed6" volumeName="kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142487 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" volumeName="kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142500 7614 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides" seLinuxMountContext=""
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142514 7614 reconstruct.go:97] "Volume reconstruction finished"
Feb 24 05:14:29.142836 master-0 kubenswrapper[7614]: I0224 05:14:29.142525 7614 reconciler.go:26] "Reconciler: start to sync state"
Feb 24 05:14:29.145932 master-0 kubenswrapper[7614]: I0224 05:14:29.144920 7614 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 24 05:14:29.170862 master-0 kubenswrapper[7614]: I0224 05:14:29.170801 7614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 24 05:14:29.172874 master-0 kubenswrapper[7614]: I0224 05:14:29.172830 7614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 24 05:14:29.172874 master-0 kubenswrapper[7614]: I0224 05:14:29.172873 7614 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 24 05:14:29.173117 master-0 kubenswrapper[7614]: I0224 05:14:29.172898 7614 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 24 05:14:29.173117 master-0 kubenswrapper[7614]: E0224 05:14:29.172948 7614 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 24 05:14:29.174911 master-0 kubenswrapper[7614]: I0224 05:14:29.174859 7614 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 24 05:14:29.199078 master-0 kubenswrapper[7614]: I0224 05:14:29.199001 7614 generic.go:334] "Generic (PLEG): container finished" podID="74e8b3c8-da80-492c-bfcf-199b40bde40b" containerID="1bdb0179be74494ec4b280a7fe7b1b7a56e9431efa12bfe29e8db06ceb6772c4" exitCode=0
Feb 24 05:14:29.205461 master-0 kubenswrapper[7614]: I0224 05:14:29.205428 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log"
Feb 24 05:14:29.205892 master-0 kubenswrapper[7614]: I0224 05:14:29.205861 7614 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c" exitCode=1
Feb 24 05:14:29.205892 master-0 kubenswrapper[7614]: I0224 05:14:29.205884 7614 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="23d5e42153d1239bec04afab6c545620b9ef683ee911bb6159c7f6877a1bbf3e" exitCode=0
Feb 24 05:14:29.208774 master-0 kubenswrapper[7614]: I0224 05:14:29.208716 7614 generic.go:334] "Generic (PLEG): container finished" podID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" containerID="c068b345adaab906615d4122b8703a382ed80a18092bab0453b7f7d8b6ad8324" exitCode=0
Feb 24 05:14:29.211025 master-0 kubenswrapper[7614]: I0224 05:14:29.210983 7614 generic.go:334] "Generic (PLEG): container finished" podID="8a278410-3079-49d9-8c59-4cedf3f50213" containerID="e982480a91e40cd1e1954911193f2f93b612563b4c71eb1b41d290507d50a572" exitCode=0
Feb 24 05:14:29.213670 master-0 kubenswrapper[7614]: I0224 05:14:29.213634 7614 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="d6dd4a61ed7af8ebd78eddfac6cf4fdcc660e18cd4faabe4c2d616a566d86ff6" exitCode=1
Feb 24 05:14:29.224102 master-0 kubenswrapper[7614]: I0224 05:14:29.224036 7614 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="db9f1ce1d0787cc02e6669cdb33b3c44fb0d9c881cd88a981199272e23c784a9" exitCode=0
Feb 24 05:14:29.224102 master-0 kubenswrapper[7614]: I0224 05:14:29.224096 7614 generic.go:334] "Generic (PLEG): container finished" 
podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="10e04cc7b2fe6f5614f2167cd49733daceb69f740134e7a457b65b54dad51b16" exitCode=0 Feb 24 05:14:29.224253 master-0 kubenswrapper[7614]: I0224 05:14:29.224113 7614 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="5ade2b4cc50015238a7faa7e8d4af8c535b8fa2c1005c60f4da3c1f127ccbe16" exitCode=0 Feb 24 05:14:29.224253 master-0 kubenswrapper[7614]: I0224 05:14:29.224133 7614 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="08804aa446128a3eba2bae15a34a0cc35ebced6e192e0098ad42bbf36874d56b" exitCode=0 Feb 24 05:14:29.224253 master-0 kubenswrapper[7614]: I0224 05:14:29.224147 7614 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="1273096ef4d43d16e5ea21290ec73d25330bc531d5f7358ac2c2166cc791f502" exitCode=0 Feb 24 05:14:29.224253 master-0 kubenswrapper[7614]: I0224 05:14:29.224160 7614 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="2b0f6afa851de70b995ddec42c066893d0946d31fc515e6b27f74dd91d84efa9" exitCode=0 Feb 24 05:14:29.231374 master-0 kubenswrapper[7614]: I0224 05:14:29.231325 7614 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d" exitCode=0 Feb 24 05:14:29.273646 master-0 kubenswrapper[7614]: E0224 05:14:29.273559 7614 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 24 05:14:29.302680 master-0 kubenswrapper[7614]: I0224 05:14:29.302632 7614 manager.go:324] Recovery completed Feb 24 05:14:29.350745 master-0 kubenswrapper[7614]: I0224 05:14:29.350686 7614 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 05:14:29.350745 master-0 kubenswrapper[7614]: I0224 05:14:29.350723 7614 
cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 24 05:14:29.350745 master-0 kubenswrapper[7614]: I0224 05:14:29.350749 7614 state_mem.go:36] "Initialized new in-memory state store" Feb 24 05:14:29.351081 master-0 kubenswrapper[7614]: I0224 05:14:29.350977 7614 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 24 05:14:29.351081 master-0 kubenswrapper[7614]: I0224 05:14:29.350996 7614 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 24 05:14:29.351081 master-0 kubenswrapper[7614]: I0224 05:14:29.351028 7614 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 24 05:14:29.351081 master-0 kubenswrapper[7614]: I0224 05:14:29.351038 7614 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 24 05:14:29.351081 master-0 kubenswrapper[7614]: I0224 05:14:29.351047 7614 policy_none.go:49] "None policy: Start" Feb 24 05:14:29.352694 master-0 kubenswrapper[7614]: I0224 05:14:29.352669 7614 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 24 05:14:29.352764 master-0 kubenswrapper[7614]: I0224 05:14:29.352699 7614 state_mem.go:35] "Initializing new in-memory state store" Feb 24 05:14:29.352932 master-0 kubenswrapper[7614]: I0224 05:14:29.352905 7614 state_mem.go:75] "Updated machine memory state" Feb 24 05:14:29.352932 master-0 kubenswrapper[7614]: I0224 05:14:29.352927 7614 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 24 05:14:29.373223 master-0 kubenswrapper[7614]: I0224 05:14:29.372963 7614 manager.go:334] "Starting Device Plugin manager" Feb 24 05:14:29.373851 master-0 kubenswrapper[7614]: I0224 05:14:29.373402 7614 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 05:14:29.373851 master-0 kubenswrapper[7614]: I0224 05:14:29.373492 7614 server.go:79] "Starting device plugin registration server" Feb 24 05:14:29.374635 master-0 kubenswrapper[7614]: I0224 
05:14:29.374592 7614 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 05:14:29.374740 master-0 kubenswrapper[7614]: I0224 05:14:29.374681 7614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 05:14:29.375968 master-0 kubenswrapper[7614]: I0224 05:14:29.375932 7614 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 05:14:29.376025 master-0 kubenswrapper[7614]: I0224 05:14:29.376011 7614 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 05:14:29.376025 master-0 kubenswrapper[7614]: I0224 05:14:29.376020 7614 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 05:14:29.475345 master-0 kubenswrapper[7614]: I0224 05:14:29.474912 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 24 05:14:29.476632 master-0 kubenswrapper[7614]: I0224 05:14:29.476536 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"c041af7c63d223942ce08c38d39df788b42cf76c6700a1fcbc754b1fc0059d6c"} Feb 24 05:14:29.476690 master-0 kubenswrapper[7614]: I0224 05:14:29.476649 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c"} Feb 24 05:14:29.476690 master-0 kubenswrapper[7614]: I0224 05:14:29.476667 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"23d5e42153d1239bec04afab6c545620b9ef683ee911bb6159c7f6877a1bbf3e"} Feb 24 05:14:29.476690 master-0 kubenswrapper[7614]: I0224 05:14:29.476682 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622"} Feb 24 05:14:29.476796 master-0 kubenswrapper[7614]: I0224 05:14:29.476701 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87cd00dcbfae0a09b15eeee8498d1b2df616ce62ff83ab180ef147871919e915" Feb 24 05:14:29.476796 master-0 kubenswrapper[7614]: I0224 05:14:29.476720 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b2e64cf1008b56ca7ac547f9f48c6ff5064b81e3d54d12e96dc4d8b69f818b" Feb 24 05:14:29.476796 master-0 kubenswrapper[7614]: I0224 05:14:29.476731 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8"} Feb 24 05:14:29.476796 master-0 kubenswrapper[7614]: I0224 05:14:29.476745 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e"} Feb 24 05:14:29.476796 master-0 kubenswrapper[7614]: I0224 05:14:29.476756 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"d6dd4a61ed7af8ebd78eddfac6cf4fdcc660e18cd4faabe4c2d616a566d86ff6"} Feb 24 05:14:29.476796 master-0 kubenswrapper[7614]: I0224 05:14:29.476770 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"b61b99b2785eeea3d1aff791e9d12068cc8f8c45a0b7df02a029df563a9b7817"} Feb 24 05:14:29.476796 master-0 kubenswrapper[7614]: I0224 05:14:29.476782 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"ec92c2ccaab799d81de24af8faba27c40dd8197fcd80279d1de6e4daee2ed87c"} Feb 24 05:14:29.476796 master-0 kubenswrapper[7614]: I0224 05:14:29.476794 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"b4dd28dfe0dfb965f7a49c3ef1803925b4da66fd0d5c36e6b22e6c8bf1f041ec"} Feb 24 05:14:29.477088 master-0 kubenswrapper[7614]: I0224 05:14:29.476811 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91bf7b8d34e1f15ac85412f592332fa821c616af9acf0e1fcb802613907ca17" Feb 24 05:14:29.477088 master-0 kubenswrapper[7614]: I0224 05:14:29.476859 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de"} Feb 24 05:14:29.477088 master-0 kubenswrapper[7614]: I0224 05:14:29.476873 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" 
event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0"} Feb 24 05:14:29.477088 master-0 kubenswrapper[7614]: I0224 05:14:29.476885 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"12dab5d350ebc129b0bfa4714d330b15","Type":"ContainerStarted","Data":"f1fb923ea59745e7261babd35ceb4f756ebbc1afdb5f4b607af29ed59d22b5f8"} Feb 24 05:14:29.477088 master-0 kubenswrapper[7614]: I0224 05:14:29.476924 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea"} Feb 24 05:14:29.477088 master-0 kubenswrapper[7614]: I0224 05:14:29.476938 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14"} Feb 24 05:14:29.477088 master-0 kubenswrapper[7614]: I0224 05:14:29.476951 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerDied","Data":"8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d"} Feb 24 05:14:29.477088 master-0 kubenswrapper[7614]: I0224 05:14:29.476965 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"687e92a6cecf1e2beeef16a0b322ad08","Type":"ContainerStarted","Data":"b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04"} Feb 24 05:14:29.478360 master-0 kubenswrapper[7614]: I0224 05:14:29.478324 7614 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 24 05:14:29.482785 master-0 kubenswrapper[7614]: I0224 05:14:29.481570 7614 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:14:29.482785 master-0 kubenswrapper[7614]: I0224 05:14:29.481655 7614 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:14:29.482785 master-0 kubenswrapper[7614]: I0224 05:14:29.481667 7614 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:14:29.482785 master-0 kubenswrapper[7614]: I0224 05:14:29.481735 7614 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 05:14:29.513745 master-0 kubenswrapper[7614]: E0224 05:14:29.513574 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:14:29.513745 master-0 kubenswrapper[7614]: W0224 05:14:29.513682 7614 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 24 05:14:29.513745 master-0 kubenswrapper[7614]: E0224 05:14:29.513751 7614 kubelet.go:1929] "Failed creating a mirror pod for" 
err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:14:29.514060 master-0 kubenswrapper[7614]: E0224 05:14:29.513785 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:14:29.514060 master-0 kubenswrapper[7614]: E0224 05:14:29.513745 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:14:29.514343 master-0 kubenswrapper[7614]: E0224 05:14:29.514275 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.517905 master-0 kubenswrapper[7614]: I0224 05:14:29.516599 7614 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 24 05:14:29.517905 master-0 kubenswrapper[7614]: I0224 05:14:29.516687 7614 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 24 05:14:29.546578 master-0 kubenswrapper[7614]: I0224 05:14:29.546514 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.546578 master-0 kubenswrapper[7614]: I0224 05:14:29.546553 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:14:29.546578 master-0 kubenswrapper[7614]: I0224 05:14:29.546577 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546600 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546674 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546696 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546711 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546729 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546745 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546763 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546780 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 
05:14:29.546901 master-0 kubenswrapper[7614]: I0224 05:14:29.546840 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:14:29.547278 master-0 kubenswrapper[7614]: I0224 05:14:29.546960 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:14:29.547278 master-0 kubenswrapper[7614]: I0224 05:14:29.547026 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:14:29.547278 master-0 kubenswrapper[7614]: I0224 05:14:29.547057 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:14:29.547278 master-0 kubenswrapper[7614]: I0224 05:14:29.547101 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") 
" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:14:29.547278 master-0 kubenswrapper[7614]: I0224 05:14:29.547123 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.647617 master-0 kubenswrapper[7614]: I0224 05:14:29.647440 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.647617 master-0 kubenswrapper[7614]: I0224 05:14:29.647522 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:14:29.647617 master-0 kubenswrapper[7614]: I0224 05:14:29.647564 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:14:29.647617 master-0 kubenswrapper[7614]: I0224 05:14:29.647597 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647632 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647665 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647698 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647731 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647765 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod 
\"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647798 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647829 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647865 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647910 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647944 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.647985 master-0 kubenswrapper[7614]: I0224 05:14:29.647977 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.648439 master-0 kubenswrapper[7614]: I0224 05:14:29.648008 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:29.648439 master-0 kubenswrapper[7614]: I0224 05:14:29.648043 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:29.648439 master-0 kubenswrapper[7614]: I0224 05:14:29.648169 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:29.648439 master-0 kubenswrapper[7614]: I0224 05:14:29.648265 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.648439 master-0 kubenswrapper[7614]: I0224 05:14:29.648340 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:14:29.648439 master-0 kubenswrapper[7614]: I0224 05:14:29.648389 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 05:14:29.648439 master-0 kubenswrapper[7614]: I0224 05:14:29.648433 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.648714 master-0 kubenswrapper[7614]: I0224 05:14:29.648480 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:29.648714 master-0 kubenswrapper[7614]: I0224 05:14:29.648528 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:29.648714 master-0 kubenswrapper[7614]: I0224 05:14:29.648574 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 05:14:29.648714 master-0 kubenswrapper[7614]: I0224 05:14:29.648620 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.648714 master-0 kubenswrapper[7614]: I0224 05:14:29.648680 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.648903 master-0 kubenswrapper[7614]: I0224 05:14:29.648726 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:29.648903 master-0 kubenswrapper[7614]: I0224 05:14:29.648772 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 05:14:29.648903 master-0 kubenswrapper[7614]: I0224 05:14:29.648819 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:14:29.648903 master-0 kubenswrapper[7614]: I0224 05:14:29.648863 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"etcd-master-0-master-0\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 05:14:29.649056 master-0 kubenswrapper[7614]: I0224 05:14:29.648915 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.649056 master-0 kubenswrapper[7614]: I0224 05:14:29.648975 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:29.649056 master-0 kubenswrapper[7614]: I0224 05:14:29.649017 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:30.112235 master-0 kubenswrapper[7614]: I0224 05:14:30.111814 7614 apiserver.go:52] "Watching apiserver"
Feb 24 05:14:30.123888 master-0 kubenswrapper[7614]: I0224 05:14:30.123824 7614 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 24 05:14:30.124830 master-0 kubenswrapper[7614]: I0224 05:14:30.124761 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-r6zx7","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99","openshift-ingress-operator/ingress-operator-6569778c84-rr8r7","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q","kube-system/bootstrap-kube-scheduler-master-0","openshift-network-diagnostics/network-check-target-vp2jg","openshift-network-node-identity/network-node-identity-rlg4x","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9","openshift-ovn-kubernetes/ovnkube-node-vd82q","openshift-multus/network-metrics-daemon-2vsjh","kube-system/bootstrap-kube-controller-manager-master-0","openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv","openshift-multus/multus-8qp5g","openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs","openshift-etcd/etcd-master-0-master-0","openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm","openshift-dns-operator/dns-operator-8c7d49845-4dhth","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-marketplace/marketplace-operator-6f5488b997-dbsnm","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4","openshift-multus/multus-additional-cni-plugins-jknmn","openshift-multus/multus-admission-controller-5f98f4f8d5-b985k","openshift-network-operator/iptables-alerter-r2vvc","openshift-network-operator/network-operator-7d7db75979-4fk6k"]
Feb 24 05:14:30.125187 master-0 kubenswrapper[7614]: I0224 05:14:30.125116 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-r6zx7"
Feb 24 05:14:30.125257 master-0 kubenswrapper[7614]: I0224 05:14:30.125236 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:30.125351 master-0 kubenswrapper[7614]: I0224 05:14:30.125236 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:30.125351 master-0 kubenswrapper[7614]: I0224 05:14:30.125281 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:14:30.126164 master-0 kubenswrapper[7614]: I0224 05:14:30.126122 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:30.127718 master-0 kubenswrapper[7614]: I0224 05:14:30.126861 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:30.127949 master-0 kubenswrapper[7614]: I0224 05:14:30.127840 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.128624 master-0 kubenswrapper[7614]: I0224 05:14:30.128585 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:30.128846 master-0 kubenswrapper[7614]: I0224 05:14:30.128808 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 05:14:30.129114 master-0 kubenswrapper[7614]: I0224 05:14:30.129017 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.129254 master-0 kubenswrapper[7614]: I0224 05:14:30.129150 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:30.129254 master-0 kubenswrapper[7614]: I0224 05:14:30.129171 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:30.129483 master-0 kubenswrapper[7614]: I0224 05:14:30.129423 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"
Feb 24 05:14:30.129582 master-0 kubenswrapper[7614]: I0224 05:14:30.129554 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:30.129744 master-0 kubenswrapper[7614]: I0224 05:14:30.129684 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 24 05:14:30.131973 master-0 kubenswrapper[7614]: I0224 05:14:30.131925 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 24 05:14:30.132071 master-0 kubenswrapper[7614]: I0224 05:14:30.131931 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 05:14:30.132162 master-0 kubenswrapper[7614]: I0224 05:14:30.132144 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.132877 master-0 kubenswrapper[7614]: I0224 05:14:30.132251 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 24 05:14:30.132877 master-0 kubenswrapper[7614]: I0224 05:14:30.132546 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.132877 master-0 kubenswrapper[7614]: I0224 05:14:30.132638 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.132877 master-0 kubenswrapper[7614]: I0224 05:14:30.132717 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.132877 master-0 kubenswrapper[7614]: I0224 05:14:30.132752 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 05:14:30.132877 master-0 kubenswrapper[7614]: I0224 05:14:30.132774 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 24 05:14:30.132877 master-0 kubenswrapper[7614]: I0224 05:14:30.132875 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 05:14:30.133294 master-0 kubenswrapper[7614]: I0224 05:14:30.133146 7614 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Feb 24 05:14:30.133294 master-0 kubenswrapper[7614]: I0224 05:14:30.133286 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.134363 master-0 kubenswrapper[7614]: I0224 05:14:30.134187 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 24 05:14:30.134658 master-0 kubenswrapper[7614]: I0224 05:14:30.134371 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 24 05:14:30.134658 master-0 kubenswrapper[7614]: I0224 05:14:30.134407 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 05:14:30.134658 master-0 kubenswrapper[7614]: I0224 05:14:30.134436 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 24 05:14:30.134658 master-0 kubenswrapper[7614]: I0224 05:14:30.134550 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 24 05:14:30.134658 master-0 kubenswrapper[7614]: I0224 05:14:30.134623 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.134658 master-0 kubenswrapper[7614]: I0224 05:14:30.134633 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.135001 master-0 kubenswrapper[7614]: I0224 05:14:30.134691 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 24 05:14:30.138938 master-0 kubenswrapper[7614]: I0224 05:14:30.138828 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.139158 master-0 kubenswrapper[7614]: I0224 05:14:30.138969 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.139158 master-0 kubenswrapper[7614]: I0224 05:14:30.139035 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.139659 master-0 kubenswrapper[7614]: I0224 05:14:30.139495 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 05:14:30.140220 master-0 kubenswrapper[7614]: I0224 05:14:30.140091 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 24 05:14:30.141595 master-0 kubenswrapper[7614]: I0224 05:14:30.140528 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 24 05:14:30.144573 master-0 kubenswrapper[7614]: I0224 05:14:30.144521 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 24 05:14:30.144803 master-0 kubenswrapper[7614]: I0224 05:14:30.144785 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.144856 master-0 kubenswrapper[7614]: I0224 05:14:30.144822 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 05:14:30.144891 master-0 kubenswrapper[7614]: I0224 05:14:30.144864 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 24 05:14:30.145026 master-0 kubenswrapper[7614]: I0224 05:14:30.145004 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 24 05:14:30.145062 master-0 kubenswrapper[7614]: I0224 05:14:30.145021 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 24 05:14:30.145148 master-0 kubenswrapper[7614]: I0224 05:14:30.145128 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 05:14:30.145148 master-0 kubenswrapper[7614]: I0224 05:14:30.145033 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 05:14:30.145303 master-0 kubenswrapper[7614]: I0224 05:14:30.145272 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.145501 master-0 kubenswrapper[7614]: I0224 05:14:30.145436 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 24 05:14:30.146359 master-0 kubenswrapper[7614]: I0224 05:14:30.145395 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.146413 master-0 kubenswrapper[7614]: I0224 05:14:30.146361 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 24 05:14:30.146604 master-0 kubenswrapper[7614]: I0224 05:14:30.146553 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 24 05:14:30.146683 master-0 kubenswrapper[7614]: I0224 05:14:30.146640 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.146726 master-0 kubenswrapper[7614]: I0224 05:14:30.146686 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 24 05:14:30.147019 master-0 kubenswrapper[7614]: I0224 05:14:30.146978 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 24 05:14:30.147019 master-0 kubenswrapper[7614]: I0224 05:14:30.146989 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.147552 master-0 kubenswrapper[7614]: I0224 05:14:30.147513 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.148168 master-0 kubenswrapper[7614]: I0224 05:14:30.148135 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.148301 master-0 kubenswrapper[7614]: I0224 05:14:30.148136 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 24 05:14:30.149178 master-0 kubenswrapper[7614]: I0224 05:14:30.149118 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 24 05:14:30.149286 master-0 kubenswrapper[7614]: I0224 05:14:30.149266 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.149350 master-0 kubenswrapper[7614]: I0224 05:14:30.149303 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 05:14:30.149415 master-0 kubenswrapper[7614]: I0224 05:14:30.149394 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.149655 master-0 kubenswrapper[7614]: I0224 05:14:30.149632 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 24 05:14:30.150433 master-0 kubenswrapper[7614]: I0224 05:14:30.150401 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 24 05:14:30.150635 master-0 kubenswrapper[7614]: I0224 05:14:30.150608 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 24 05:14:30.150928 master-0 kubenswrapper[7614]: I0224 05:14:30.150892 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:14:30.151281 master-0 kubenswrapper[7614]: I0224 05:14:30.151249 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.151343 master-0 kubenswrapper[7614]: I0224 05:14:30.151300 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:30.151343 master-0 kubenswrapper[7614]: I0224 05:14:30.151338 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.151492 master-0 kubenswrapper[7614]: I0224 05:14:30.151463 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:14:30.151575 master-0 kubenswrapper[7614]: I0224 05:14:30.151527 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.151610 master-0 kubenswrapper[7614]: I0224 05:14:30.151584 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.151652 master-0 kubenswrapper[7614]: I0224 05:14:30.151628 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.151686 master-0 kubenswrapper[7614]: I0224 05:14:30.151671 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.151717 master-0 kubenswrapper[7614]: I0224 05:14:30.151706 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:30.151756 master-0 kubenswrapper[7614]: I0224 05:14:30.151737 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmf87\" (UniqueName: \"kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:30.151786 master-0 kubenswrapper[7614]: I0224 05:14:30.151773 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcb72\" (UniqueName: \"kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:30.151836 master-0 kubenswrapper[7614]: I0224 05:14:30.151811 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdpfz\" (UniqueName: \"kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:30.151932 master-0 kubenswrapper[7614]: I0224 05:14:30.151904 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.152108 master-0 kubenswrapper[7614]: I0224 05:14:30.152072 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:14:30.152148 master-0 kubenswrapper[7614]: I0224 05:14:30.152102 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 24 05:14:30.152202 master-0 kubenswrapper[7614]: I0224 05:14:30.152152 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.152237 master-0 kubenswrapper[7614]: I0224 05:14:30.152218 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:30.152237 master-0 kubenswrapper[7614]: I0224 05:14:30.152231 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 05:14:30.152296 master-0 kubenswrapper[7614]: I0224 05:14:30.152248 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:30.152296 master-0 kubenswrapper[7614]: I0224 05:14:30.152269 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:14:30.152296 master-0 kubenswrapper[7614]: I0224 05:14:30.152289 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgf94\" (UniqueName: \"kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.152395 master-0 kubenswrapper[7614]: I0224 05:14:30.152321 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.152395 master-0 kubenswrapper[7614]: I0224 05:14:30.152342 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb68s\" (UniqueName: \"kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:30.152395 master-0 kubenswrapper[7614]: I0224 05:14:30.152365 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.152395 master-0 kubenswrapper[7614]: I0224 05:14:30.152387 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.152531 master-0 kubenswrapper[7614]: I0224 05:14:30.152405 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.152531 master-0 kubenswrapper[7614]: I0224 05:14:30.152446 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:30.152531 master-0 kubenswrapper[7614]: I0224 05:14:30.152451 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:30.152630 master-0 kubenswrapper[7614]: I0224 05:14:30.152598 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:30.152730 master-0 kubenswrapper[7614]: I0224 05:14:30.152686 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k"
Feb 24 05:14:30.152783 master-0 kubenswrapper[7614]: I0224 05:14:30.152712 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:30.152864 master-0 kubenswrapper[7614]: I0224 05:14:30.152737 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.152901 master-0 kubenswrapper[7614]: I0224 05:14:30.152703 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:14:30.152965 master-0 kubenswrapper[7614]: I0224 05:14:30.152761 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.153702 master-0 kubenswrapper[7614]: I0224 05:14:30.153669 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 24 05:14:30.154535 master-0
kubenswrapper[7614]: I0224 05:14:30.154486 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.154582 master-0 kubenswrapper[7614]: I0224 05:14:30.154559 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 05:14:30.154728 master-0 kubenswrapper[7614]: I0224 05:14:30.154558 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:30.154780 master-0 kubenswrapper[7614]: I0224 05:14:30.154731 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 05:14:30.154780 master-0 kubenswrapper[7614]: I0224 05:14:30.154752 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:30.154839 master-0 kubenswrapper[7614]: I0224 05:14:30.154790 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod 
\"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.154839 master-0 kubenswrapper[7614]: I0224 05:14:30.154824 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.155103 master-0 kubenswrapper[7614]: I0224 05:14:30.154845 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 05:14:30.155103 master-0 kubenswrapper[7614]: I0224 05:14:30.154858 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:14:30.155103 master-0 kubenswrapper[7614]: I0224 05:14:30.154993 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:14:30.155103 master-0 kubenswrapper[7614]: I0224 05:14:30.155030 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhmp\" (UniqueName: \"kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " 
pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:30.155214 master-0 kubenswrapper[7614]: I0224 05:14:30.155196 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 05:14:30.155279 master-0 kubenswrapper[7614]: I0224 05:14:30.155228 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:14:30.155337 master-0 kubenswrapper[7614]: I0224 05:14:30.155288 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:14:30.155375 master-0 kubenswrapper[7614]: I0224 05:14:30.155346 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:30.155474 master-0 kubenswrapper[7614]: I0224 05:14:30.155415 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.155546 master-0 kubenswrapper[7614]: 
I0224 05:14:30.155510 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:14:30.155611 master-0 kubenswrapper[7614]: I0224 05:14:30.155577 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:14:30.155652 master-0 kubenswrapper[7614]: I0224 05:14:30.155586 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:30.155685 master-0 kubenswrapper[7614]: I0224 05:14:30.155642 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.155733 master-0 kubenswrapper[7614]: I0224 05:14:30.155699 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: 
\"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.155769 master-0 kubenswrapper[7614]: I0224 05:14:30.155756 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj2tz\" (UniqueName: \"kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:30.156906 master-0 kubenswrapper[7614]: I0224 05:14:30.156877 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 05:14:30.157227 master-0 kubenswrapper[7614]: I0224 05:14:30.157075 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.157270 master-0 kubenswrapper[7614]: I0224 05:14:30.157234 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:30.157379 master-0 kubenswrapper[7614]: I0224 05:14:30.157285 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:30.157423 master-0 kubenswrapper[7614]: I0224 05:14:30.157369 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:14:30.157509 master-0 kubenswrapper[7614]: I0224 05:14:30.157464 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:14:30.157580 master-0 kubenswrapper[7614]: I0224 05:14:30.157544 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79h66\" (UniqueName: \"kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.157580 master-0 kubenswrapper[7614]: I0224 05:14:30.157481 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 05:14:30.157638 master-0 kubenswrapper[7614]: I0224 05:14:30.157577 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.157638 master-0 kubenswrapper[7614]: I0224 05:14:30.157624 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:30.157697 master-0 kubenswrapper[7614]: I0224 05:14:30.157652 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:30.157796 master-0 kubenswrapper[7614]: I0224 05:14:30.157751 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.157834 master-0 kubenswrapper[7614]: I0224 05:14:30.157797 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 05:14:30.157883 master-0 kubenswrapper[7614]: I0224 05:14:30.157811 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 
05:14:30.157922 master-0 kubenswrapper[7614]: I0224 05:14:30.157897 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.157981 master-0 kubenswrapper[7614]: I0224 05:14:30.157957 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.158018 master-0 kubenswrapper[7614]: I0224 05:14:30.157993 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4d5x\" (UniqueName: \"kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:30.158047 master-0 kubenswrapper[7614]: I0224 05:14:30.158037 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:30.158251 master-0 kubenswrapper[7614]: I0224 05:14:30.158223 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config\") 
pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:14:30.158360 master-0 kubenswrapper[7614]: I0224 05:14:30.158339 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8cq\" (UniqueName: \"kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:30.158415 master-0 kubenswrapper[7614]: I0224 05:14:30.158371 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:30.158415 master-0 kubenswrapper[7614]: I0224 05:14:30.158401 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:14:30.158477 master-0 kubenswrapper[7614]: I0224 05:14:30.158429 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:30.158477 master-0 kubenswrapper[7614]: I0224 05:14:30.158455 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcj62\" (UniqueName: \"kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:14:30.158543 master-0 kubenswrapper[7614]: I0224 05:14:30.158479 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs794\" (UniqueName: \"kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:14:30.158543 master-0 kubenswrapper[7614]: I0224 05:14:30.158505 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:14:30.158543 master-0 kubenswrapper[7614]: I0224 05:14:30.158529 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.158629 master-0 kubenswrapper[7614]: I0224 05:14:30.158552 7614 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:14:30.158629 master-0 kubenswrapper[7614]: I0224 05:14:30.158575 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:14:30.158629 master-0 kubenswrapper[7614]: I0224 05:14:30.158600 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5djr\" (UniqueName: \"kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr\") pod \"csi-snapshot-controller-operator-6fb4df594f-8tv99\" (UID: \"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" Feb 24 05:14:30.158629 master-0 kubenswrapper[7614]: I0224 05:14:30.158625 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9kf2\" (UniqueName: \"kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:14:30.158738 master-0 kubenswrapper[7614]: I0224 05:14:30.158649 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2r9\" (UniqueName: 
\"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:30.158738 master-0 kubenswrapper[7614]: I0224 05:14:30.158671 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.158738 master-0 kubenswrapper[7614]: I0224 05:14:30.158695 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl828\" (UniqueName: \"kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:14:30.158738 master-0 kubenswrapper[7614]: I0224 05:14:30.158719 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ktz5\" (UniqueName: \"kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:14:30.158738 master-0 kubenswrapper[7614]: I0224 05:14:30.158737 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.158865 master-0 kubenswrapper[7614]: I0224 05:14:30.158763 7614 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:14:30.158865 master-0 kubenswrapper[7614]: I0224 05:14:30.158787 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:30.158865 master-0 kubenswrapper[7614]: I0224 05:14:30.158810 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:30.158865 master-0 kubenswrapper[7614]: I0224 05:14:30.158834 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.158865 master-0 kubenswrapper[7614]: I0224 05:14:30.158858 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: 
\"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:14:30.159063 master-0 kubenswrapper[7614]: I0224 05:14:30.158881 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:14:30.159063 master-0 kubenswrapper[7614]: I0224 05:14:30.158907 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:30.159063 master-0 kubenswrapper[7614]: I0224 05:14:30.158930 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:14:30.159063 master-0 kubenswrapper[7614]: I0224 05:14:30.158951 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:14:30.159063 master-0 kubenswrapper[7614]: I0224 
05:14:30.158979 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:14:30.159469 master-0 kubenswrapper[7614]: I0224 05:14:30.159408 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 24 05:14:30.160138 master-0 kubenswrapper[7614]: I0224 05:14:30.159755 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwzq\" (UniqueName: \"kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:14:30.160138 master-0 kubenswrapper[7614]: I0224 05:14:30.159807 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 24 05:14:30.160138 master-0 kubenswrapper[7614]: I0224 05:14:30.159856 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 24 05:14:30.160542 master-0 kubenswrapper[7614]: I0224 05:14:30.160500 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 05:14:30.160638 master-0 kubenswrapper[7614]: I0224 05:14:30.160605 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 24 05:14:30.160704 master-0 kubenswrapper[7614]: I0224 05:14:30.160680 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 24 05:14:30.160835 master-0 kubenswrapper[7614]: I0224 05:14:30.160802 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.160878 master-0 kubenswrapper[7614]: I0224 05:14:30.160845 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.160912 master-0 kubenswrapper[7614]: I0224 05:14:30.160813 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 24 05:14:30.160942 master-0 kubenswrapper[7614]: I0224 05:14:30.160909 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 24 05:14:30.160999 master-0 kubenswrapper[7614]: I0224 05:14:30.160966 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 05:14:30.161033 master-0 kubenswrapper[7614]: I0224 05:14:30.160876 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x"
Feb 24 05:14:30.161033 master-0 kubenswrapper[7614]: I0224 05:14:30.160874 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 24 05:14:30.161153 master-0 kubenswrapper[7614]: I0224 05:14:30.161034 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.161153 master-0 kubenswrapper[7614]: I0224 05:14:30.161049 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:14:30.161153 master-0 kubenswrapper[7614]: I0224 05:14:30.161093 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:30.161153 master-0 kubenswrapper[7614]: I0224 05:14:30.161120 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:14:30.161261 master-0 kubenswrapper[7614]: I0224 05:14:30.161139 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.161261 master-0 kubenswrapper[7614]: I0224 05:14:30.161203 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:30.161422 master-0 kubenswrapper[7614]: I0224 05:14:30.161289 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:30.161422 master-0 kubenswrapper[7614]: I0224 05:14:30.161388 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x"
Feb 24 05:14:30.161422 master-0 kubenswrapper[7614]: I0224 05:14:30.161408 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.161512 master-0 kubenswrapper[7614]: I0224 05:14:30.161483 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:30.161684 master-0 kubenswrapper[7614]: I0224 05:14:30.161632 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:30.161684 master-0 kubenswrapper[7614]: I0224 05:14:30.161666 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:30.161746 master-0 kubenswrapper[7614]: I0224 05:14:30.161726 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x"
Feb 24 05:14:30.161780 master-0 kubenswrapper[7614]: I0224 05:14:30.161767 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"
Feb 24 05:14:30.161780 master-0 kubenswrapper[7614]: I0224 05:14:30.161795 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.161780 master-0 kubenswrapper[7614]: I0224 05:14:30.161819 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwc5b\" (UniqueName: \"kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.161780 master-0 kubenswrapper[7614]: I0224 05:14:30.161840 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.162031 master-0 kubenswrapper[7614]: I0224 05:14:30.161861 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.162031 master-0 kubenswrapper[7614]: I0224 05:14:30.161891 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:30.162031 master-0 kubenswrapper[7614]: I0224 05:14:30.161920 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8zb\" (UniqueName: \"kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x"
Feb 24 05:14:30.162031 master-0 kubenswrapper[7614]: I0224 05:14:30.161923 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"
Feb 24 05:14:30.162031 master-0 kubenswrapper[7614]: I0224 05:14:30.161950 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x"
Feb 24 05:14:30.162364 master-0 kubenswrapper[7614]: I0224 05:14:30.162121 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.162364 master-0 kubenswrapper[7614]: I0224 05:14:30.162153 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:14:30.162364 master-0 kubenswrapper[7614]: I0224 05:14:30.162161 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.162364 master-0 kubenswrapper[7614]: I0224 05:14:30.162163 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.162364 master-0 kubenswrapper[7614]: I0224 05:14:30.162254 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 24 05:14:30.162364 master-0 kubenswrapper[7614]: I0224 05:14:30.162296 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 24 05:14:30.162364 master-0 kubenswrapper[7614]: I0224 05:14:30.162174 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.162547 master-0 kubenswrapper[7614]: I0224 05:14:30.162388 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:30.162547 master-0 kubenswrapper[7614]: I0224 05:14:30.162412 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b7f4\" (UniqueName: \"kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:30.162547 master-0 kubenswrapper[7614]: I0224 05:14:30.162433 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.162547 master-0 kubenswrapper[7614]: I0224 05:14:30.162453 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.162547 master-0 kubenswrapper[7614]: I0224 05:14:30.162482 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:14:30.162547 master-0 kubenswrapper[7614]: I0224 05:14:30.162505 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:30.162547 master-0 kubenswrapper[7614]: I0224 05:14:30.162530 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4rw\" (UniqueName: \"kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.162757 master-0 kubenswrapper[7614]: I0224 05:14:30.162553 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xzk\" (UniqueName: \"kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:30.162757 master-0 kubenswrapper[7614]: I0224 05:14:30.162559 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.162757 master-0 kubenswrapper[7614]: I0224 05:14:30.162577 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.162757 master-0 kubenswrapper[7614]: I0224 05:14:30.162582 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.162757 master-0 kubenswrapper[7614]: I0224 05:14:30.162601 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:30.162757 master-0 kubenswrapper[7614]: I0224 05:14:30.162554 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:30.162757 master-0 kubenswrapper[7614]: I0224 05:14:30.162672 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 24 05:14:30.162941 master-0 kubenswrapper[7614]: I0224 05:14:30.162923 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.163264 master-0 kubenswrapper[7614]: I0224 05:14:30.163211 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:30.163446 master-0 kubenswrapper[7614]: I0224 05:14:30.163396 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 05:14:30.163651 master-0 kubenswrapper[7614]: I0224 05:14:30.163625 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 05:14:30.163857 master-0 kubenswrapper[7614]: I0224 05:14:30.163830 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 24 05:14:30.164026 master-0 kubenswrapper[7614]: I0224 05:14:30.163992 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 05:14:30.164163 master-0 kubenswrapper[7614]: I0224 05:14:30.164120 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 24 05:14:30.165706 master-0 kubenswrapper[7614]: I0224 05:14:30.165672 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 24 05:14:30.166025 master-0 kubenswrapper[7614]: I0224 05:14:30.165994 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.166072 master-0 kubenswrapper[7614]: I0224 05:14:30.166055 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.166195 master-0 kubenswrapper[7614]: I0224 05:14:30.166168 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"
Feb 24 05:14:30.166270 master-0 kubenswrapper[7614]: I0224 05:14:30.166246 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm"
Feb 24 05:14:30.168166 master-0 kubenswrapper[7614]: I0224 05:14:30.168131 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 05:14:30.168401 master-0 kubenswrapper[7614]: I0224 05:14:30.168361 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.168477 master-0 kubenswrapper[7614]: I0224 05:14:30.168452 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 24 05:14:30.168651 master-0 kubenswrapper[7614]: I0224 05:14:30.168627 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 24 05:14:30.168651 master-0 kubenswrapper[7614]: I0224 05:14:30.164258 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 24 05:14:30.168715 master-0 kubenswrapper[7614]: I0224 05:14:30.164293 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 24 05:14:30.169159 master-0 kubenswrapper[7614]: I0224 05:14:30.169116 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 24 05:14:30.169341 master-0 kubenswrapper[7614]: I0224 05:14:30.169286 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 05:14:30.169716 master-0 kubenswrapper[7614]: I0224 05:14:30.169670 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.169893 master-0 kubenswrapper[7614]: I0224 05:14:30.169860 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:14:30.169966 master-0 kubenswrapper[7614]: I0224 05:14:30.169934 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.170021 master-0 kubenswrapper[7614]: I0224 05:14:30.169995 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:30.170102 master-0 kubenswrapper[7614]: I0224 05:14:30.170057 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.170141 master-0 kubenswrapper[7614]: I0224 05:14:30.170106 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.170195 master-0 kubenswrapper[7614]: I0224 05:14:30.170149 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 24 05:14:30.170642 master-0 kubenswrapper[7614]: I0224 05:14:30.170614 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:14:30.170769 master-0 kubenswrapper[7614]: I0224 05:14:30.170740 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.170840 master-0 kubenswrapper[7614]: I0224 05:14:30.170812 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.171260 master-0 kubenswrapper[7614]: I0224 05:14:30.171220 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.172618 master-0 kubenswrapper[7614]: I0224 05:14:30.171611 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:30.172618 master-0 kubenswrapper[7614]: I0224 05:14:30.172162 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:14:30.172618 master-0 kubenswrapper[7614]: I0224 05:14:30.172140 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x"
Feb 24 05:14:30.172898 master-0 kubenswrapper[7614]: I0224 05:14:30.172743 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:14:30.172898 master-0 kubenswrapper[7614]: I0224 05:14:30.172768 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.173448 master-0 kubenswrapper[7614]: I0224 05:14:30.173411 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.174458 master-0 kubenswrapper[7614]: I0224 05:14:30.174038 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:14:30.174458 master-0 kubenswrapper[7614]: I0224 05:14:30.174001 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:30.174628 master-0 kubenswrapper[7614]: I0224 05:14:30.174455 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.174628 master-0 kubenswrapper[7614]: I0224 05:14:30.174495 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.174808 master-0 kubenswrapper[7614]: I0224 05:14:30.174650 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.174889 master-0 kubenswrapper[7614]: I0224 05:14:30.174862 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:14:30.175047 master-0 kubenswrapper[7614]: I0224 05:14:30.174902 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.175109 master-0 kubenswrapper[7614]: I0224 05:14:30.174935 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script\")
pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:14:30.175397 master-0 kubenswrapper[7614]: I0224 05:14:30.175297 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:30.190293 master-0 kubenswrapper[7614]: I0224 05:14:30.190242 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 05:14:30.191894 master-0 kubenswrapper[7614]: I0224 05:14:30.191859 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgf94\" (UniqueName: \"kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:14:30.191894 master-0 kubenswrapper[7614]: I0224 05:14:30.191879 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:30.191976 master-0 kubenswrapper[7614]: I0224 05:14:30.191888 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmf87\" (UniqueName: \"kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:14:30.201144 master-0 kubenswrapper[7614]: I0224 05:14:30.201095 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdpfz\" (UniqueName: \"kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:30.204388 master-0 kubenswrapper[7614]: I0224 05:14:30.204302 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcb72\" (UniqueName: \"kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:30.206586 master-0 kubenswrapper[7614]: I0224 05:14:30.206562 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb68s\" (UniqueName: \"kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:30.208906 master-0 kubenswrapper[7614]: I0224 05:14:30.208873 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhmp\" (UniqueName: \"kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:30.229278 master-0 kubenswrapper[7614]: I0224 05:14:30.229222 7614 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xj2tz\" (UniqueName: \"kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:30.250107 master-0 kubenswrapper[7614]: I0224 05:14:30.250066 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79h66\" (UniqueName: \"kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.266117 master-0 kubenswrapper[7614]: I0224 05:14:30.266074 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:30.270966 master-0 kubenswrapper[7614]: I0224 05:14:30.270929 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271025 master-0 kubenswrapper[7614]: I0224 05:14:30.270984 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.271184 master-0 
kubenswrapper[7614]: I0224 05:14:30.271156 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271230 master-0 kubenswrapper[7614]: I0224 05:14:30.271168 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.271261 master-0 kubenswrapper[7614]: I0224 05:14:30.271237 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:14:30.271261 master-0 kubenswrapper[7614]: I0224 05:14:30.271239 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271337 master-0 kubenswrapper[7614]: I0224 05:14:30.271255 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271374 master-0 
kubenswrapper[7614]: I0224 05:14:30.271352 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:14:30.271374 master-0 kubenswrapper[7614]: I0224 05:14:30.271361 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.271429 master-0 kubenswrapper[7614]: I0224 05:14:30.271409 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.271491 master-0 kubenswrapper[7614]: I0224 05:14:30.271462 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:14:30.271548 master-0 kubenswrapper[7614]: I0224 05:14:30.271491 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 
05:14:30.271548 master-0 kubenswrapper[7614]: I0224 05:14:30.271547 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.271641 master-0 kubenswrapper[7614]: I0224 05:14:30.271573 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.271641 master-0 kubenswrapper[7614]: I0224 05:14:30.271608 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:30.271641 master-0 kubenswrapper[7614]: I0224 05:14:30.271637 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:30.271725 master-0 kubenswrapper[7614]: I0224 05:14:30.271666 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod \"multus-8qp5g\" (UID: 
\"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271725 master-0 kubenswrapper[7614]: I0224 05:14:30.271698 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271779 master-0 kubenswrapper[7614]: I0224 05:14:30.271727 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:14:30.271779 master-0 kubenswrapper[7614]: I0224 05:14:30.271755 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:14:30.271779 master-0 kubenswrapper[7614]: I0224 05:14:30.271764 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271857 master-0 kubenswrapper[7614]: I0224 05:14:30.271816 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: 
\"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271857 master-0 kubenswrapper[7614]: I0224 05:14:30.271828 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.271857 master-0 kubenswrapper[7614]: I0224 05:14:30.271845 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.271935 master-0 kubenswrapper[7614]: E0224 05:14:30.271866 7614 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 24 05:14:30.271935 master-0 kubenswrapper[7614]: I0224 05:14:30.271876 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:30.271935 master-0 kubenswrapper[7614]: I0224 05:14:30.271879 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:30.271935 master-0 
kubenswrapper[7614]: I0224 05:14:30.271932 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.272043 master-0 kubenswrapper[7614]: I0224 05:14:30.271930 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.272043 master-0 kubenswrapper[7614]: E0224 05:14:30.271960 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.771930197 +0000 UTC m=+1.806673343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : secret "metrics-daemon-secret" not found Feb 24 05:14:30.272043 master-0 kubenswrapper[7614]: I0224 05:14:30.271963 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.272043 master-0 kubenswrapper[7614]: I0224 05:14:30.271983 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:30.272043 master-0 kubenswrapper[7614]: I0224 05:14:30.272005 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272043 master-0 kubenswrapper[7614]: I0224 05:14:30.272028 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 
05:14:30.272210 master-0 kubenswrapper[7614]: I0224 05:14:30.272053 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272210 master-0 kubenswrapper[7614]: E0224 05:14:30.272101 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:30.272210 master-0 kubenswrapper[7614]: I0224 05:14:30.272111 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272210 master-0 kubenswrapper[7614]: E0224 05:14:30.272166 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.772144233 +0000 UTC m=+1.806887619 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:30.272381 master-0 kubenswrapper[7614]: E0224 05:14:30.272224 7614 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:30.272381 master-0 kubenswrapper[7614]: E0224 05:14:30.272258 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.772248976 +0000 UTC m=+1.806992352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:30.272381 master-0 kubenswrapper[7614]: I0224 05:14:30.272281 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272381 master-0 kubenswrapper[7614]: I0224 05:14:30.272289 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.272381 master-0 kubenswrapper[7614]: I0224 05:14:30.272354 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272381 master-0 kubenswrapper[7614]: I0224 05:14:30.272358 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.272381 master-0 kubenswrapper[7614]: I0224 05:14:30.272388 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:14:30.272571 master-0 kubenswrapper[7614]: I0224 05:14:30.272479 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272571 master-0 kubenswrapper[7614]: I0224 05:14:30.272501 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " 
pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:14:30.272571 master-0 kubenswrapper[7614]: I0224 05:14:30.272522 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:14:30.272571 master-0 kubenswrapper[7614]: I0224 05:14:30.272570 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272669 master-0 kubenswrapper[7614]: I0224 05:14:30.272615 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272669 master-0 kubenswrapper[7614]: I0224 05:14:30.272627 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:14:30.272669 master-0 kubenswrapper[7614]: I0224 05:14:30.272641 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 
05:14:30.272759 master-0 kubenswrapper[7614]: I0224 05:14:30.272683 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.272759 master-0 kubenswrapper[7614]: I0224 05:14:30.272690 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.272759 master-0 kubenswrapper[7614]: I0224 05:14:30.272733 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"
Feb 24 05:14:30.272759 master-0 kubenswrapper[7614]: I0224 05:14:30.272749 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.272860 master-0 kubenswrapper[7614]: I0224 05:14:30.272785 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.272860 master-0 kubenswrapper[7614]: I0224 05:14:30.272813 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.272860 master-0 kubenswrapper[7614]: I0224 05:14:30.272836 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.272942 master-0 kubenswrapper[7614]: I0224 05:14:30.272863 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.272942 master-0 kubenswrapper[7614]: I0224 05:14:30.272861 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.272942 master-0 kubenswrapper[7614]: I0224 05:14:30.272901 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.273023 master-0 kubenswrapper[7614]: E0224 05:14:30.272942 7614 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 24 05:14:30.273023 master-0 kubenswrapper[7614]: E0224 05:14:30.272993 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.772972925 +0000 UTC m=+1.807716311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found
Feb 24 05:14:30.273023 master-0 kubenswrapper[7614]: E0224 05:14:30.273001 7614 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 24 05:14:30.273106 master-0 kubenswrapper[7614]: I0224 05:14:30.273050 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273106 master-0 kubenswrapper[7614]: E0224 05:14:30.273061 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.773041957 +0000 UTC m=+1.807785113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found
Feb 24 05:14:30.273106 master-0 kubenswrapper[7614]: I0224 05:14:30.272944 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:30.273106 master-0 kubenswrapper[7614]: I0224 05:14:30.273094 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.273206 master-0 kubenswrapper[7614]: I0224 05:14:30.273113 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:30.273206 master-0 kubenswrapper[7614]: I0224 05:14:30.273123 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.273206 master-0 kubenswrapper[7614]: I0224 05:14:30.273150 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.273206 master-0 kubenswrapper[7614]: I0224 05:14:30.273204 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:14:30.273369 master-0 kubenswrapper[7614]: I0224 05:14:30.273234 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273369 master-0 kubenswrapper[7614]: E0224 05:14:30.273248 7614 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 05:14:30.273369 master-0 kubenswrapper[7614]: I0224 05:14:30.273266 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:30.273369 master-0 kubenswrapper[7614]: E0224 05:14:30.273275 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.773267523 +0000 UTC m=+1.808010679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found
Feb 24 05:14:30.273369 master-0 kubenswrapper[7614]: I0224 05:14:30.273341 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273369 master-0 kubenswrapper[7614]: E0224 05:14:30.273352 7614 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 05:14:30.273369 master-0 kubenswrapper[7614]: I0224 05:14:30.273372 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273566 master-0 kubenswrapper[7614]: E0224 05:14:30.273377 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.773370446 +0000 UTC m=+1.808113602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found
Feb 24 05:14:30.273566 master-0 kubenswrapper[7614]: I0224 05:14:30.273437 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273566 master-0 kubenswrapper[7614]: I0224 05:14:30.273513 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273566 master-0 kubenswrapper[7614]: I0224 05:14:30.273534 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:30.273566 master-0 kubenswrapper[7614]: I0224 05:14:30.273554 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:30.273696 master-0 kubenswrapper[7614]: I0224 05:14:30.273605 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273696 master-0 kubenswrapper[7614]: E0224 05:14:30.273631 7614 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 24 05:14:30.273696 master-0 kubenswrapper[7614]: E0224 05:14:30.273680 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.773650873 +0000 UTC m=+1.808394029 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found
Feb 24 05:14:30.273777 master-0 kubenswrapper[7614]: E0224 05:14:30.273696 7614 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 05:14:30.273777 master-0 kubenswrapper[7614]: I0224 05:14:30.273703 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273777 master-0 kubenswrapper[7614]: I0224 05:14:30.273733 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273777 master-0 kubenswrapper[7614]: E0224 05:14:30.273740 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.773727335 +0000 UTC m=+1.808470711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found
Feb 24 05:14:30.273777 master-0 kubenswrapper[7614]: I0224 05:14:30.273757 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273904 master-0 kubenswrapper[7614]: I0224 05:14:30.273778 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:30.273904 master-0 kubenswrapper[7614]: E0224 05:14:30.273823 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 24 05:14:30.273904 master-0 kubenswrapper[7614]: E0224 05:14:30.273849 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:30.773842879 +0000 UTC m=+1.808586035 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found
Feb 24 05:14:30.273904 master-0 kubenswrapper[7614]: I0224 05:14:30.273783 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:30.273904 master-0 kubenswrapper[7614]: I0224 05:14:30.273874 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.273904 master-0 kubenswrapper[7614]: I0224 05:14:30.273889 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.274056 master-0 kubenswrapper[7614]: I0224 05:14:30.273917 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.274056 master-0 kubenswrapper[7614]: I0224 05:14:30.273924 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.274056 master-0 kubenswrapper[7614]: I0224 05:14:30.273954 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.274056 master-0 kubenswrapper[7614]: I0224 05:14:30.274014 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.274433 master-0 kubenswrapper[7614]: I0224 05:14:30.274408 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:30.274483 master-0 kubenswrapper[7614]: I0224 05:14:30.274406 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.287415 master-0 kubenswrapper[7614]: I0224 05:14:30.287384 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:30.330138 master-0 kubenswrapper[7614]: I0224 05:14:30.330094 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8cq\" (UniqueName: \"kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"
Feb 24 05:14:30.330547 master-0 kubenswrapper[7614]: I0224 05:14:30.330518 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4d5x\" (UniqueName: \"kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:30.355249 master-0 kubenswrapper[7614]: I0224 05:14:30.355168 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl828\" (UniqueName: \"kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:14:30.369850 master-0 kubenswrapper[7614]: I0224 05:14:30.369755 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcj62\" (UniqueName: \"kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k"
Feb 24 05:14:30.389728 master-0 kubenswrapper[7614]: I0224 05:14:30.389696 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q2r9\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:30.416078 master-0 kubenswrapper[7614]: I0224 05:14:30.416024 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5djr\" (UniqueName: \"kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr\") pod \"csi-snapshot-controller-operator-6fb4df594f-8tv99\" (UID: \"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99"
Feb 24 05:14:30.426638 master-0 kubenswrapper[7614]: I0224 05:14:30.426589 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9kf2\" (UniqueName: \"kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:14:30.429318 master-0 kubenswrapper[7614]: I0224 05:14:30.428770 7614 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 05:14:30.448402 master-0 kubenswrapper[7614]: I0224 05:14:30.448346 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwzq\" (UniqueName: \"kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:14:30.468400 master-0 kubenswrapper[7614]: I0224 05:14:30.468349 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ktz5\" (UniqueName: \"kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:30.486196 master-0 kubenswrapper[7614]: I0224 05:14:30.486074 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:14:30.506137 master-0 kubenswrapper[7614]: I0224 05:14:30.506098 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwc5b\" (UniqueName: \"kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:14:30.536143 master-0 kubenswrapper[7614]: I0224 05:14:30.536098 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p8zb\" (UniqueName: \"kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x"
Feb 24 05:14:30.547989 master-0 kubenswrapper[7614]: I0224 05:14:30.547939 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:14:30.575835 master-0 kubenswrapper[7614]: I0224 05:14:30.575708 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b7f4\" (UniqueName: \"kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:14:30.584670 master-0 kubenswrapper[7614]: I0224 05:14:30.584620 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx4rw\" (UniqueName: \"kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:14:30.607953 master-0 kubenswrapper[7614]: I0224 05:14:30.607909 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:14:30.633633 master-0 kubenswrapper[7614]: I0224 05:14:30.633526 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs794\" (UniqueName: \"kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:14:30.646768 master-0 kubenswrapper[7614]: I0224 05:14:30.646688 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xzk\" (UniqueName: \"kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:14:30.686364 master-0 kubenswrapper[7614]: E0224 05:14:30.686286 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:30.708333 master-0 kubenswrapper[7614]: E0224 05:14:30.708250 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:14:30.722760 master-0 kubenswrapper[7614]: E0224 05:14:30.722727 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:14:30.742537 master-0 kubenswrapper[7614]: W0224 05:14:30.742502 7614 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Feb 24 05:14:30.742660 master-0 kubenswrapper[7614]: E0224 05:14:30.742591 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Feb 24 05:14:30.767113 master-0 kubenswrapper[7614]: E0224 05:14:30.763757 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 05:14:30.786920 master-0 kubenswrapper[7614]: I0224 05:14:30.786856 7614 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 24 05:14:30.790728 master-0 kubenswrapper[7614]: I0224 05:14:30.790684 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:30.791033 master-0 kubenswrapper[7614]: I0224 05:14:30.790973 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:30.791097 master-0 kubenswrapper[7614]: E0224 05:14:30.791011 7614 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 24 05:14:30.791130 master-0 kubenswrapper[7614]: E0224 05:14:30.791111 7614 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 05:14:30.791191 master-0 kubenswrapper[7614]: E0224 05:14:30.791167 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.791150765 +0000 UTC m=+2.825893911 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found
Feb 24 05:14:30.791238 master-0 kubenswrapper[7614]: E0224 05:14:30.791196 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 24 05:14:30.791238 master-0 kubenswrapper[7614]: I0224 05:14:30.791073 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:30.791325 master-0 kubenswrapper[7614]: E0224 05:14:30.791267 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.791230327 +0000 UTC m=+2.825973503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found
Feb 24 05:14:30.791368 master-0 kubenswrapper[7614]: E0224 05:14:30.791350 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.79133385 +0000 UTC m=+2.826077026 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found
Feb 24 05:14:30.791489 master-0 kubenswrapper[7614]: I0224 05:14:30.791442 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:30.791549 master-0 kubenswrapper[7614]: E0224 05:14:30.791527 7614 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 24 05:14:30.791580 master-0 kubenswrapper[7614]: I0224 05:14:30.791559 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:30.791611 master-0 kubenswrapper[7614]: E0224 05:14:30.791580 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.791558856 +0000 UTC m=+2.826302202 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : secret "metrics-daemon-secret" not found
Feb 24 05:14:30.791665 master-0 kubenswrapper[7614]: I0224 05:14:30.791637 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:30.791721 master-0 kubenswrapper[7614]: E0224 05:14:30.791698 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 24 05:14:30.791758 master-0 kubenswrapper[7614]: E0224 05:14:30.791717 7614 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 24 05:14:30.791818 master-0 kubenswrapper[7614]: I0224 05:14:30.791785 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"
Feb 24 05:14:30.791884 master-0 kubenswrapper[7614]: E0224 05:14:30.791863 7614 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 24 05:14:30.791946 master-0 kubenswrapper[7614]: E0224 05:14:30.791898 7614 nestedpendingoperations.go:348] Operation
for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.791819563 +0000 UTC m=+2.826562719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:30.791991 master-0 kubenswrapper[7614]: E0224 05:14:30.791964 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.791955016 +0000 UTC m=+2.826698172 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:30.792033 master-0 kubenswrapper[7614]: I0224 05:14:30.792020 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:30.792099 master-0 kubenswrapper[7614]: I0224 05:14:30.792075 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:30.792099 master-0 kubenswrapper[7614]: E0224 05:14:30.792083 7614 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 05:14:30.792162 master-0 kubenswrapper[7614]: E0224 05:14:30.792119 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.792105311 +0000 UTC m=+2.826848657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found Feb 24 05:14:30.792162 master-0 kubenswrapper[7614]: I0224 05:14:30.792151 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:30.792240 master-0 kubenswrapper[7614]: E0224 05:14:30.792219 7614 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:30.792275 master-0 kubenswrapper[7614]: E0224 05:14:30.792249 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics 
podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.792235105 +0000 UTC m=+2.826978261 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found Feb 24 05:14:30.792275 master-0 kubenswrapper[7614]: E0224 05:14:30.792272 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.792263765 +0000 UTC m=+2.827006921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found Feb 24 05:14:30.792356 master-0 kubenswrapper[7614]: E0224 05:14:30.792327 7614 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 05:14:30.792636 master-0 kubenswrapper[7614]: E0224 05:14:30.792391 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:31.792381999 +0000 UTC m=+2.827125155 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found Feb 24 05:14:30.794749 master-0 kubenswrapper[7614]: I0224 05:14:30.794703 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:14:30.806488 master-0 kubenswrapper[7614]: E0224 05:14:30.806435 7614 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc" Feb 24 05:14:30.806734 master-0 kubenswrapper[7614]: E0224 05:14:30.806681 7614 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc,Command:[cluster-kube-storage-version-migrator-operator 
start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlwzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
kube-storage-version-migrator-operator-fc889cfd5-r6p58_openshift-kube-storage-version-migrator-operator(c3fed34f-b275-42c6-af6c-8de3e6fe0f9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 05:14:30.808019 master-0 kubenswrapper[7614]: E0224 05:14:30.807912 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" podUID="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" Feb 24 05:14:31.030287 master-0 kubenswrapper[7614]: I0224 05:14:31.030194 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:14:31.370044 master-0 kubenswrapper[7614]: E0224 05:14:31.369921 7614 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7" Feb 24 05:14:31.370784 master-0 kubenswrapper[7614]: E0224 05:14:31.370116 7614 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7,Command:[cluster-kube-scheduler-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:ni
l,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-77cd4d9559-8l7xv_openshift-kube-scheduler-operator(e6f05507-d5c1-4102-a220-1db715a496e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 05:14:31.371444 master-0 kubenswrapper[7614]: E0224 05:14:31.371370 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" podUID="e6f05507-d5c1-4102-a220-1db715a496e3" Feb 24 05:14:31.404549 master-0 kubenswrapper[7614]: I0224 05:14:31.404492 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:31.802436 master-0 kubenswrapper[7614]: I0224 05:14:31.802388 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:14:31.802699 master-0 kubenswrapper[7614]: I0224 05:14:31.802445 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:31.802699 master-0 kubenswrapper[7614]: E0224 05:14:31.802543 7614 secret.go:189] Couldn't get secret 
openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:31.802699 master-0 kubenswrapper[7614]: E0224 05:14:31.802598 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.802581533 +0000 UTC m=+4.837324689 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:31.802699 master-0 kubenswrapper[7614]: E0224 05:14:31.802543 7614 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 24 05:14:31.802699 master-0 kubenswrapper[7614]: E0224 05:14:31.802663 7614 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:31.802883 master-0 kubenswrapper[7614]: I0224 05:14:31.802618 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:31.802883 master-0 kubenswrapper[7614]: E0224 05:14:31.802664 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. 
No retries permitted until 2026-02-24 05:14:33.802653365 +0000 UTC m=+4.837396521 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : secret "metrics-daemon-secret" not found Feb 24 05:14:31.802883 master-0 kubenswrapper[7614]: I0224 05:14:31.802847 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:31.802997 master-0 kubenswrapper[7614]: I0224 05:14:31.802904 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:31.802997 master-0 kubenswrapper[7614]: E0224 05:14:31.802929 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.802920032 +0000 UTC m=+4.837663188 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:31.802997 master-0 kubenswrapper[7614]: I0224 05:14:31.802954 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:31.802997 master-0 kubenswrapper[7614]: I0224 05:14:31.802986 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: I0224 05:14:31.803006 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803020 7614 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803034 7614 secret.go:189] Couldn't get secret 
openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803080 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.803062925 +0000 UTC m=+4.837806081 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803087 7614 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: I0224 05:14:31.803024 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803103 7614 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803125 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. 
No retries permitted until 2026-02-24 05:14:33.803114227 +0000 UTC m=+4.837857473 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803079 7614 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803148 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.803136577 +0000 UTC m=+4.837879963 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803148 7614 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:31.803146 master-0 kubenswrapper[7614]: E0224 05:14:31.803161 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.803155018 +0000 UTC m=+4.837898174 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found Feb 24 05:14:31.803610 master-0 kubenswrapper[7614]: E0224 05:14:31.803177 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.803169728 +0000 UTC m=+4.837912884 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found Feb 24 05:14:31.803610 master-0 kubenswrapper[7614]: E0224 05:14:31.803189 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.803184289 +0000 UTC m=+4.837927445 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found Feb 24 05:14:31.803610 master-0 kubenswrapper[7614]: I0224 05:14:31.803205 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:31.803610 master-0 kubenswrapper[7614]: E0224 05:14:31.803430 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 05:14:31.803610 master-0 kubenswrapper[7614]: E0224 05:14:31.803540 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:33.803516918 +0000 UTC m=+4.838260074 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found Feb 24 05:14:32.009602 master-0 kubenswrapper[7614]: E0224 05:14:32.009511 7614 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19" Feb 24 05:14:32.011547 master-0 kubenswrapper[7614]: E0224 05:14:32.009868 7614 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9kf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-8586dccc9b-49fsv_openshift-apiserver-operator(58ecd829-4749-4c8a-933b-16b4acccac90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 05:14:32.011547 master-0 kubenswrapper[7614]: E0224 05:14:32.011085 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" podUID="58ecd829-4749-4c8a-933b-16b4acccac90" Feb 24 05:14:32.835568 master-0 kubenswrapper[7614]: E0224 05:14:32.835492 7614 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc 
= copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9" Feb 24 05:14:32.836044 master-0 kubenswrapper[7614]: E0224 05:14:32.835710 7614 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b7f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-r2vvc_openshift-network-operator(f6690909-3a87-4bdc-b0ec-1cdd4df32e4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 05:14:32.843299 master-0 kubenswrapper[7614]: E0224 05:14:32.836945 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-r2vvc" podUID="f6690909-3a87-4bdc-b0ec-1cdd4df32e4b" Feb 24 05:14:33.422856 master-0 kubenswrapper[7614]: E0224 05:14:33.422775 7614 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e" Feb 24 05:14:33.423247 master-0 kubenswrapper[7614]: E0224 05:14:33.423011 7614 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 05:14:33.423247 master-0 kubenswrapper[7614]: container &Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then Feb 24 05:14:33.423247 master-0 kubenswrapper[7614]: echo "Copying system trust bundle" Feb 24 05:14:33.423247 master-0 kubenswrapper[7614]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem Feb 24 05:14:33.423247 master-0 kubenswrapper[7614]: fi Feb 24 05:14:33.423247 master-0 kubenswrapper[7614]: exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate Feb 24 05:14:33.423247 master-0 kubenswrapper[7614]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3167ddf67ad2f83e1a3f49ac6c7ee826469ce9ec16db6390f6a94dac24f6a346,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.33_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwc5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 24 05:14:33.423247 master-0 kubenswrapper[7614]: > logger="UnhandledError" Feb 24 05:14:33.424274 master-0 kubenswrapper[7614]: E0224 05:14:33.424201 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" Feb 24 05:14:33.826412 master-0 kubenswrapper[7614]: I0224 05:14:33.826273 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:14:33.826412 master-0 kubenswrapper[7614]: I0224 05:14:33.826364 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:33.826412 master-0 kubenswrapper[7614]: I0224 05:14:33.826396 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:33.826412 master-0 kubenswrapper[7614]: I0224 05:14:33.826428 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:14:33.826412 master-0 kubenswrapper[7614]: I0224 05:14:33.826451 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: I0224 05:14:33.826476 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: I0224 05:14:33.826677 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826703 7614 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826754 7614 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826832 7614 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826838 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.82680257 +0000 UTC m=+8.861545716 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : secret "metrics-daemon-secret" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: I0224 05:14:33.826716 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826907 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.826890853 +0000 UTC m=+8.861634009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826923 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.826916023 +0000 UTC m=+8.861659179 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826882 7614 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826944 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826908 7614 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826972 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.826965365 +0000 UTC m=+8.861708521 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826983 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. 
No retries permitted until 2026-02-24 05:14:37.826978935 +0000 UTC m=+8.861722091 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.826996 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.826991165 +0000 UTC m=+8.861734321 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.827000 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.827023 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.827013666 +0000 UTC m=+8.861756822 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.827049 7614 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.827051 7614 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.827071 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.827062867 +0000 UTC m=+8.861806023 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: I0224 05:14:33.826974 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.827108 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.827080298 +0000 UTC m=+8.861823484 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: I0224 05:14:33.827162 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.827260 7614 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 24 05:14:33.827093 master-0 kubenswrapper[7614]: E0224 05:14:33.827296 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:37.827287703 +0000 UTC m=+8.862030859 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found Feb 24 05:14:34.191157 master-0 kubenswrapper[7614]: I0224 05:14:34.190964 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:34.231063 master-0 kubenswrapper[7614]: I0224 05:14:34.230977 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:34.254818 master-0 kubenswrapper[7614]: E0224 05:14:34.254721 7614 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896" Feb 24 05:14:34.255078 master-0 kubenswrapper[7614]: E0224 05:14:34.254992 7614 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896,Command:[cluster-openshift-controller-manager-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gmf87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-584cc7bcb5-zz9fm_openshift-controller-manager-operator(933beda1-c930-4831-a886-3cc6b7a992ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 24 05:14:34.256156 master-0 kubenswrapper[7614]: E0224 05:14:34.256091 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" podUID="933beda1-c930-4831-a886-3cc6b7a992ad" Feb 24 05:14:34.258105 master-0 kubenswrapper[7614]: I0224 05:14:34.258066 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 24 05:14:35.199716 master-0 kubenswrapper[7614]: I0224 05:14:35.199666 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:14:35.200753 master-0 kubenswrapper[7614]: E0224 05:14:35.200637 7614 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e" Feb 24 05:14:35.201056 master-0 kubenswrapper[7614]: E0224 05:14:35.200947 7614 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e,Command:[],Args:[start 
-v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d953b34fe1ab03e9a57b3c91de4220683cf92e804edb5f5c230e5888e1c5a6d2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.33,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5djr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000150000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-6fb4df594f-8tv99_openshift-cluster-storage-operator(feee7fe8-e805-4807-b4c0-ecc7ef0f88d9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 24 05:14:35.202144 master-0 kubenswrapper[7614]: E0224 05:14:35.202092 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" podUID="feee7fe8-e805-4807-b4c0-ecc7ef0f88d9"
Feb 24 05:14:35.203983 master-0 kubenswrapper[7614]: I0224 05:14:35.203951 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:35.408434 master-0 kubenswrapper[7614]: I0224 05:14:35.405964 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-vp2jg"]
Feb 24 05:14:35.578446 master-0 kubenswrapper[7614]: I0224 05:14:35.578400 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:35.584422 master-0 kubenswrapper[7614]: I0224 05:14:35.584380 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:35.923283 master-0 kubenswrapper[7614]: I0224 05:14:35.923178 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:36.259482 master-0 kubenswrapper[7614]: I0224 05:14:36.259395 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" event={"ID":"7a2c651d-ea1a-41f2-9745-04adc8d88904","Type":"ContainerStarted","Data":"1fe643ed33a9f72192d56893c5e0183a5530b52d1fd5cb43d00c8adaabb5837c"}
Feb 24 05:14:36.261445 master-0 kubenswrapper[7614]: I0224 05:14:36.261409 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" event={"ID":"d86d5bbe-3768-4695-810b-245a56e4fd1d","Type":"ContainerStarted","Data":"104b76f7ac0ef4084c50822d35c6690afc0cd965133c5d489594ae901dd1b9f2"}
Feb 24 05:14:36.263691 master-0 kubenswrapper[7614]: I0224 05:14:36.263636 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vp2jg" event={"ID":"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa","Type":"ContainerStarted","Data":"869ee4400ec52ed45dd8f4a46a91fdbc333be1390c9aa162fbaf199cff1662a2"}
Feb 24 05:14:36.263738 master-0 kubenswrapper[7614]: I0224 05:14:36.263697 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vp2jg" event={"ID":"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa","Type":"ContainerStarted","Data":"42dcfde8494f887ef3a1248e80ba66a922da1760343eca1d2afd960d88b81901"}
Feb 24 05:14:36.263899 master-0 kubenswrapper[7614]: I0224 05:14:36.263850 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:14:36.266337 master-0 kubenswrapper[7614]: I0224 05:14:36.266274 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" event={"ID":"22813c83-2f60-44ad-9624-ad367cec08f7","Type":"ContainerStarted","Data":"3d7e3ee020313467e6fefd173d6752fc4e4ffcc2fae974414212fcbe51114f7d"}
Feb 24 05:14:36.268301 master-0 kubenswrapper[7614]: I0224 05:14:36.268253 7614 generic.go:334] "Generic (PLEG): container finished" podID="633d33a1-e1b1-40b0-b56a-afb0c1085d97" containerID="949e362ec4e4631e4492e74c9f4477ed75b0f79c5280bc8dd59a6bd3118464ad" exitCode=0
Feb 24 05:14:36.268457 master-0 kubenswrapper[7614]: I0224 05:14:36.268390 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" event={"ID":"633d33a1-e1b1-40b0-b56a-afb0c1085d97","Type":"ContainerDied","Data":"949e362ec4e4631e4492e74c9f4477ed75b0f79c5280bc8dd59a6bd3118464ad"}
Feb 24 05:14:36.613147 master-0 kubenswrapper[7614]: I0224 05:14:36.612686 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:36.691815 master-0 kubenswrapper[7614]: I0224 05:14:36.691688 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:37.272154 master-0 kubenswrapper[7614]: I0224 05:14:37.272082 7614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:14:37.272154 master-0 kubenswrapper[7614]: I0224 05:14:37.272136 7614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:14:37.889706 master-0 kubenswrapper[7614]: I0224 05:14:37.889573 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:37.889706 master-0 kubenswrapper[7614]: I0224 05:14:37.889678 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:37.890178 master-0 kubenswrapper[7614]: I0224 05:14:37.889734 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:37.890178 master-0 kubenswrapper[7614]: I0224 05:14:37.889791 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"
Feb 24 05:14:37.890178 master-0 kubenswrapper[7614]: I0224 05:14:37.889836 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:37.890178 master-0 kubenswrapper[7614]: E0224 05:14:37.889850 7614 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 24 05:14:37.890178 master-0 kubenswrapper[7614]: E0224 05:14:37.889961 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.88992803 +0000 UTC m=+16.924671216 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : secret "metrics-daemon-secret" not found
Feb 24 05:14:37.890178 master-0 kubenswrapper[7614]: E0224 05:14:37.890070 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Feb 24 05:14:37.890178 master-0 kubenswrapper[7614]: I0224 05:14:37.890157 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890231 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890191937 +0000 UTC m=+16.924935123 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "performance-addon-operator-webhook-cert" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890301 7614 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: I0224 05:14:37.890335 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890411 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert podName:7c4b448f-670e-45a1-bdd7-c42903c682a9 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890388022 +0000 UTC m=+16.925131378 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert") pod "cluster-version-operator-5cfd9759cf-r4rf2" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9") : secret "cluster-version-operator-serving-cert" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890337 7614 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890438 7614 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890492 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890473074 +0000 UTC m=+16.925216530 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890440 7614 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890511 7614 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890523 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890507135 +0000 UTC m=+16.925250591 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890599 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls podName:996ae0be-d36c-47f4-98b2-1c89591f9506 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890580617 +0000 UTC m=+16.925323803 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls") pod "dns-operator-8c7d49845-4dhth" (UID: "996ae0be-d36c-47f4-98b2-1c89591f9506") : secret "metrics-tls" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: E0224 05:14:37.890623 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890612238 +0000 UTC m=+16.925355424 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: I0224 05:14:37.890654 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: I0224 05:14:37.890716 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:37.890771 master-0 kubenswrapper[7614]: I0224 05:14:37.890760 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:37.891801 master-0 kubenswrapper[7614]: E0224 05:14:37.890847 7614 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Feb 24 05:14:37.891801 master-0 kubenswrapper[7614]: E0224 05:14:37.890896 7614 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Feb 24 05:14:37.891801 master-0 kubenswrapper[7614]: E0224 05:14:37.890900 7614 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Feb 24 05:14:37.891801 master-0 kubenswrapper[7614]: E0224 05:14:37.890927 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls podName:3d6b1ce7-1213-494c-829d-186d39eac7eb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890908727 +0000 UTC m=+16.925651913 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls") pod "ingress-operator-6569778c84-rr8r7" (UID: "3d6b1ce7-1213-494c-829d-186d39eac7eb") : secret "metrics-tls" not found
Feb 24 05:14:37.891801 master-0 kubenswrapper[7614]: E0224 05:14:37.890963 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls podName:6e5ede6a-9d4b-47a2-b4ba-e6018910d05a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890943558 +0000 UTC m=+16.925686744 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bcf775fc9-h99t4" (UID: "6e5ede6a-9d4b-47a2-b4ba-e6018910d05a") : secret "node-tuning-operator-tls" not found
Feb 24 05:14:37.891801 master-0 kubenswrapper[7614]: E0224 05:14:37.890985 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:45.890975229 +0000 UTC m=+16.925718415 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found
Feb 24 05:14:38.500598 master-0 kubenswrapper[7614]: I0224 05:14:38.500335 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:38.510600 master-0 kubenswrapper[7614]: I0224 05:14:38.510511 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:38.618639 master-0 kubenswrapper[7614]: I0224 05:14:38.618533 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:38.618980 master-0 kubenswrapper[7614]: I0224 05:14:38.618909 7614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:14:38.618980 master-0 kubenswrapper[7614]: I0224 05:14:38.618960 7614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:14:38.635531 master-0 kubenswrapper[7614]: I0224 05:14:38.635419 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-fsmrl"]
Feb 24 05:14:38.635859 master-0 kubenswrapper[7614]: E0224 05:14:38.635716 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" containerName="prober"
Feb 24 05:14:38.635859 master-0 kubenswrapper[7614]: I0224 05:14:38.635740 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" containerName="prober"
Feb 24 05:14:38.635859 master-0 kubenswrapper[7614]: E0224 05:14:38.635765 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:14:38.635859 master-0 kubenswrapper[7614]: I0224 05:14:38.635778 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:14:38.636128 master-0 kubenswrapper[7614]: I0224 05:14:38.635903 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba74ac93-7ad1-46e5-97c6-75c410d6a39e" containerName="prober"
Feb 24 05:14:38.636128 master-0 kubenswrapper[7614]: I0224 05:14:38.635927 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:14:38.636645 master-0 kubenswrapper[7614]: I0224 05:14:38.636579 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.639557 master-0 kubenswrapper[7614]: I0224 05:14:38.639481 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 24 05:14:38.641133 master-0 kubenswrapper[7614]: I0224 05:14:38.641064 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 24 05:14:38.641556 master-0 kubenswrapper[7614]: I0224 05:14:38.641515 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 24 05:14:38.641916 master-0 kubenswrapper[7614]: I0224 05:14:38.641871 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 24 05:14:38.657619 master-0 kubenswrapper[7614]: I0224 05:14:38.655021 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-fsmrl"]
Feb 24 05:14:38.680891 master-0 kubenswrapper[7614]: I0224 05:14:38.680811 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:38.703438 master-0 kubenswrapper[7614]: I0224 05:14:38.703357 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-cabundle\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.703698 master-0 kubenswrapper[7614]: I0224 05:14:38.703643 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p67bp\" (UniqueName: \"kubernetes.io/projected/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-kube-api-access-p67bp\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.704278 master-0 kubenswrapper[7614]: I0224 05:14:38.704214 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-key\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.806147 master-0 kubenswrapper[7614]: I0224 05:14:38.805970 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-cabundle\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.806147 master-0 kubenswrapper[7614]: I0224 05:14:38.806086 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p67bp\" (UniqueName: \"kubernetes.io/projected/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-kube-api-access-p67bp\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.806922 master-0 kubenswrapper[7614]: I0224 05:14:38.806861 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-key\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.807838 master-0 kubenswrapper[7614]: I0224 05:14:38.807771 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-cabundle\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.818269 master-0 kubenswrapper[7614]: I0224 05:14:38.818189 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-key\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.839095 master-0 kubenswrapper[7614]: I0224 05:14:38.839029 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p67bp\" (UniqueName: \"kubernetes.io/projected/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-kube-api-access-p67bp\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:38.993750 master-0 kubenswrapper[7614]: I0224 05:14:38.993671 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl"
Feb 24 05:14:39.265134 master-0 kubenswrapper[7614]: I0224 05:14:39.264781 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-576b4d78bd-fsmrl"]
Feb 24 05:14:39.277120 master-0 kubenswrapper[7614]: W0224 05:14:39.277047 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab5afff8_1081_4acc_8ab9_d6bfd8df1d67.slice/crio-aa70a59110835e6aad43cf1cb5ed855bb86de37892d716ff87772c740d916d65 WatchSource:0}: Error finding container aa70a59110835e6aad43cf1cb5ed855bb86de37892d716ff87772c740d916d65: Status 404 returned error can't find the container with id aa70a59110835e6aad43cf1cb5ed855bb86de37892d716ff87772c740d916d65
Feb 24 05:14:39.280591 master-0 kubenswrapper[7614]: I0224 05:14:39.279574 7614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:14:39.286266 master-0 kubenswrapper[7614]: I0224 05:14:39.286216 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:14:40.296146 master-0 kubenswrapper[7614]: I0224 05:14:40.295814 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" event={"ID":"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67","Type":"ContainerStarted","Data":"6c52c639645d2cd2c7e662742a4602420e9f03d769221f35786d315c1351ca22"}
Feb 24 05:14:40.296146 master-0 kubenswrapper[7614]: I0224 05:14:40.296134 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" event={"ID":"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67","Type":"ContainerStarted","Data":"aa70a59110835e6aad43cf1cb5ed855bb86de37892d716ff87772c740d916d65"}
Feb 24 05:14:40.321037 master-0 kubenswrapper[7614]: I0224 05:14:40.320913 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" podStartSLOduration=2.320891256 podStartE2EDuration="2.320891256s" podCreationTimestamp="2026-02-24 05:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:40.320881125 +0000 UTC m=+11.355624291" watchObservedRunningTime="2026-02-24 05:14:40.320891256 +0000 UTC m=+11.355634412"
Feb 24 05:14:41.305909 master-0 kubenswrapper[7614]: I0224 05:14:41.305369 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" event={"ID":"633d33a1-e1b1-40b0-b56a-afb0c1085d97","Type":"ContainerDied","Data":"4a59d0e70f795f32652b83ec45dcee79f28f6c433debe212f2b8fdd27b68d652"}
Feb 24 05:14:41.305909 master-0 kubenswrapper[7614]: I0224 05:14:41.305374 7614 generic.go:334] "Generic (PLEG): container finished" podID="633d33a1-e1b1-40b0-b56a-afb0c1085d97" containerID="4a59d0e70f795f32652b83ec45dcee79f28f6c433debe212f2b8fdd27b68d652" exitCode=0
Feb 24 05:14:42.892581 master-0 kubenswrapper[7614]: I0224 05:14:42.890212 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:42.892581 master-0 kubenswrapper[7614]: I0224 05:14:42.890610 7614 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:14:42.921231 master-0 kubenswrapper[7614]: I0224 05:14:42.921160 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:14:43.322634 master-0 kubenswrapper[7614]: I0224 05:14:43.322553 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" event={"ID":"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e","Type":"ContainerStarted","Data":"80dce2d75efa45ca36b53637a94f5b4155d200b7759d2e7b129815f6f4324f5a"}
Feb 24 05:14:44.327157 master-0 kubenswrapper[7614]: I0224 05:14:44.327072 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" event={"ID":"633d33a1-e1b1-40b0-b56a-afb0c1085d97","Type":"ContainerStarted","Data":"f0a59447aa5599eed278c625c9ff436eeea9214419570f5ba689ba155470685a"}
Feb 24 05:14:44.578714 master-0 kubenswrapper[7614]: I0224 05:14:44.578633 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d"]
Feb 24 05:14:44.580228 master-0 kubenswrapper[7614]: I0224 05:14:44.579415 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d"
Feb 24 05:14:44.585829 master-0 kubenswrapper[7614]: I0224 05:14:44.585775 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 24 05:14:44.590359 master-0 kubenswrapper[7614]: I0224 05:14:44.586696 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 24 05:14:44.590359 master-0 kubenswrapper[7614]: I0224 05:14:44.590096 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d"]
Feb 24 05:14:44.688048 master-0 kubenswrapper[7614]: I0224 05:14:44.686093 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9pp4\" (UniqueName: \"kubernetes.io/projected/03e4cebe-f3df-423f-be2b-7fb22bd58341-kube-api-access-f9pp4\") pod \"migrator-5c85bff57-txt9d\" (UID: \"03e4cebe-f3df-423f-be2b-7fb22bd58341\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d"
Feb 24 05:14:44.787980 master-0 kubenswrapper[7614]: I0224 05:14:44.787846 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9pp4\" (UniqueName: \"kubernetes.io/projected/03e4cebe-f3df-423f-be2b-7fb22bd58341-kube-api-access-f9pp4\") pod \"migrator-5c85bff57-txt9d\" (UID: \"03e4cebe-f3df-423f-be2b-7fb22bd58341\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d"
Feb 24 05:14:44.818722 master-0 kubenswrapper[7614]: I0224 05:14:44.818661 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9pp4\" (UniqueName: \"kubernetes.io/projected/03e4cebe-f3df-423f-be2b-7fb22bd58341-kube-api-access-f9pp4\") pod \"migrator-5c85bff57-txt9d\" (UID: \"03e4cebe-f3df-423f-be2b-7fb22bd58341\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d"
Feb 24 05:14:44.966345 master-0 kubenswrapper[7614]: I0224 05:14:44.965677 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d"
Feb 24 05:14:45.206685 master-0 kubenswrapper[7614]: I0224 05:14:45.206213 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d"]
Feb 24 05:14:45.223376 master-0 kubenswrapper[7614]: W0224 05:14:45.223268 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03e4cebe_f3df_423f_be2b_7fb22bd58341.slice/crio-da13c43822ff6ebef72ea5dada557656eab3613ad082a77190dd348e4d4caec1 WatchSource:0}: Error finding container da13c43822ff6ebef72ea5dada557656eab3613ad082a77190dd348e4d4caec1: Status 404 returned error can't find the container with id da13c43822ff6ebef72ea5dada557656eab3613ad082a77190dd348e4d4caec1
Feb 24 05:14:45.352654 master-0 kubenswrapper[7614]: I0224 05:14:45.352456 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" event={"ID":"e6f05507-d5c1-4102-a220-1db715a496e3","Type":"ContainerStarted","Data":"acdec98fa977010c1aa977c4f0cce838f4bc4ae8e6cd6029b1446085a34e0532"}
Feb 24 05:14:45.357075 master-0 kubenswrapper[7614]: I0224 05:14:45.356484 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerStarted","Data":"d5ce8ccd581f3f0a727f122a907bfeeff964d35571ffdd52c3f7804a92dfb1d9"}
Feb 24 05:14:45.361608 master-0 kubenswrapper[7614]: I0224 05:14:45.361538 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d" event={"ID":"03e4cebe-f3df-423f-be2b-7fb22bd58341","Type":"ContainerStarted","Data":"da13c43822ff6ebef72ea5dada557656eab3613ad082a77190dd348e4d4caec1"}
Feb 24 05:14:45.910460 master-0 kubenswrapper[7614]: I0224 05:14:45.910374 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:14:45.910460 master-0 kubenswrapper[7614]: I0224 05:14:45.910450 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:14:45.910824 master-0 kubenswrapper[7614]: E0224 05:14:45.910656 7614 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Feb 24 05:14:45.910895 master-0 kubenswrapper[7614]: E0224 05:14:45.910843 7614 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Feb 24 05:14:45.913559 master-0 kubenswrapper[7614]: I0224 05:14:45.913483 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:14:45.913806 master-0 kubenswrapper[7614]: I0224 05:14:45.913633 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"
Feb 24 05:14:45.913806 master-0 kubenswrapper[7614]: I0224 05:14:45.913673 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:14:45.913806 master-0 kubenswrapper[7614]: I0224 05:14:45.913693 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:14:45.913806 master-0 kubenswrapper[7614]: I0224 05:14:45.913724 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:14:45.913806 master-0 kubenswrapper[7614]: I0224 05:14:45.913752 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:14:45.913806 master-0
kubenswrapper[7614]: I0224 05:14:45.913771 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:45.913806 master-0 kubenswrapper[7614]: I0224 05:14:45.913798 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:45.916670 master-0 kubenswrapper[7614]: E0224 05:14:45.916591 7614 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 24 05:14:45.916859 master-0 kubenswrapper[7614]: E0224 05:14:45.916726 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert podName:49bfccec-61ec-4bef-a561-9f6e6f906215 nodeName:}" failed. No retries permitted until 2026-02-24 05:15:01.916687918 +0000 UTC m=+32.951431114 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert") pod "package-server-manager-5c75f78c8b-9d82f" (UID: "49bfccec-61ec-4bef-a561-9f6e6f906215") : secret "package-server-manager-serving-cert" not found Feb 24 05:14:45.917474 master-0 kubenswrapper[7614]: E0224 05:14:45.916854 7614 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 24 05:14:45.917474 master-0 kubenswrapper[7614]: E0224 05:14:45.916920 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs podName:8be1f8db-3f0b-4d6f-be42-7564fba66820 nodeName:}" failed. No retries permitted until 2026-02-24 05:15:01.916897515 +0000 UTC m=+32.951640711 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs") pod "multus-admission-controller-5f98f4f8d5-b985k" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820") : secret "multus-admission-controller-secret" not found Feb 24 05:14:45.917474 master-0 kubenswrapper[7614]: E0224 05:14:45.916962 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls podName:c177f8fe-8145-4557-ae78-af121efe001c nodeName:}" failed. No retries permitted until 2026-02-24 05:15:01.916942606 +0000 UTC m=+32.951685802 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-6bb6d78bf-mzb7q" (UID: "c177f8fe-8145-4557-ae78-af121efe001c") : secret "cluster-monitoring-operator-tls" not found Feb 24 05:14:45.917474 master-0 kubenswrapper[7614]: E0224 05:14:45.916995 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs podName:7dcc5520-7aa8-4cd5-b06d-591827ed4e2a nodeName:}" failed. No retries permitted until 2026-02-24 05:15:01.916978537 +0000 UTC m=+32.951721793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs") pod "network-metrics-daemon-2vsjh" (UID: "7dcc5520-7aa8-4cd5-b06d-591827ed4e2a") : secret "metrics-daemon-secret" not found Feb 24 05:14:45.917474 master-0 kubenswrapper[7614]: E0224 05:14:45.917096 7614 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 24 05:14:45.917474 master-0 kubenswrapper[7614]: E0224 05:14:45.917150 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics podName:dd29bef3-d27e-48b3-9aa0-d915e949b3d5 nodeName:}" failed. No retries permitted until 2026-02-24 05:15:01.917133511 +0000 UTC m=+32.951876707 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics") pod "marketplace-operator-6f5488b997-dbsnm" (UID: "dd29bef3-d27e-48b3-9aa0-d915e949b3d5") : secret "marketplace-operator-metrics" not found Feb 24 05:14:45.917474 master-0 kubenswrapper[7614]: I0224 05:14:45.917253 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:45.920759 master-0 kubenswrapper[7614]: I0224 05:14:45.920707 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:45.921613 master-0 kubenswrapper[7614]: I0224 05:14:45.921577 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:45.923630 master-0 kubenswrapper[7614]: I0224 05:14:45.923592 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " 
pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:45.933580 master-0 kubenswrapper[7614]: I0224 05:14:45.933514 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"cluster-version-operator-5cfd9759cf-r4rf2\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") " pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:46.028263 master-0 kubenswrapper[7614]: I0224 05:14:46.028192 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:14:46.028510 master-0 kubenswrapper[7614]: I0224 05:14:46.028236 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:14:46.029968 master-0 kubenswrapper[7614]: I0224 05:14:46.029936 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" Feb 24 05:14:46.030194 master-0 kubenswrapper[7614]: I0224 05:14:46.030157 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:14:46.079435 master-0 kubenswrapper[7614]: W0224 05:14:46.079372 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c4b448f_670e_45a1_bdd7_c42903c682a9.slice/crio-019ddbfba3ca4b29c85cce38fc32243e83dcf06f54ada15a33120765deb62756 WatchSource:0}: Error finding container 019ddbfba3ca4b29c85cce38fc32243e83dcf06f54ada15a33120765deb62756: Status 404 returned error can't find the container with id 019ddbfba3ca4b29c85cce38fc32243e83dcf06f54ada15a33120765deb62756 Feb 24 05:14:46.262224 master-0 kubenswrapper[7614]: I0224 05:14:46.262138 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"] Feb 24 05:14:46.294238 master-0 kubenswrapper[7614]: I0224 05:14:46.294177 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-8c7d49845-4dhth"] Feb 24 05:14:46.325285 master-0 kubenswrapper[7614]: I0224 05:14:46.325226 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"] Feb 24 05:14:46.339375 master-0 kubenswrapper[7614]: W0224 05:14:46.339302 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d6b1ce7_1213_494c_829d_186d39eac7eb.slice/crio-e0b212afd7d07d05ad4af03681bd28027ddd652c6e3c593a77163ced8697a47e WatchSource:0}: Error finding container e0b212afd7d07d05ad4af03681bd28027ddd652c6e3c593a77163ced8697a47e: Status 404 returned error can't find the container with id e0b212afd7d07d05ad4af03681bd28027ddd652c6e3c593a77163ced8697a47e Feb 24 05:14:46.366446 master-0 kubenswrapper[7614]: I0224 05:14:46.365904 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" 
event={"ID":"996ae0be-d36c-47f4-98b2-1c89591f9506","Type":"ContainerStarted","Data":"cd174549be5b88f39588bafbc22af8049014b8bbed26dfd817fa5184b48774e3"} Feb 24 05:14:46.367125 master-0 kubenswrapper[7614]: I0224 05:14:46.367051 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" event={"ID":"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a","Type":"ContainerStarted","Data":"2b0278ee2f5e88257e8f5b58fed5df5f9b9d95fcd14996f65f2dd1c054e4ac57"} Feb 24 05:14:46.368035 master-0 kubenswrapper[7614]: I0224 05:14:46.368000 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"e0b212afd7d07d05ad4af03681bd28027ddd652c6e3c593a77163ced8697a47e"} Feb 24 05:14:46.369319 master-0 kubenswrapper[7614]: I0224 05:14:46.369273 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" event={"ID":"7c4b448f-670e-45a1-bdd7-c42903c682a9","Type":"ContainerStarted","Data":"019ddbfba3ca4b29c85cce38fc32243e83dcf06f54ada15a33120765deb62756"} Feb 24 05:14:47.374799 master-0 kubenswrapper[7614]: I0224 05:14:47.373793 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" event={"ID":"58ecd829-4749-4c8a-933b-16b4acccac90","Type":"ContainerStarted","Data":"19a4a70cd708813c9cf34e54dd49971eba939aacdcaa013905918a3ca917b13e"} Feb 24 05:14:47.379158 master-0 kubenswrapper[7614]: I0224 05:14:47.379059 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d" event={"ID":"03e4cebe-f3df-423f-be2b-7fb22bd58341","Type":"ContainerStarted","Data":"c10b931b1e59c5324862f7e3b31c3bd36099ae2748775cd4b569e4bf0137e64e"} Feb 24 05:14:47.379466 master-0 kubenswrapper[7614]: 
I0224 05:14:47.379359 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d" event={"ID":"03e4cebe-f3df-423f-be2b-7fb22bd58341","Type":"ContainerStarted","Data":"9772f9cf798b64aa772b12919799a9a01e15b2097fe75057547c144dbea32f22"} Feb 24 05:14:47.413935 master-0 kubenswrapper[7614]: I0224 05:14:47.413858 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d" podStartSLOduration=1.6966538180000001 podStartE2EDuration="3.413837624s" podCreationTimestamp="2026-02-24 05:14:44 +0000 UTC" firstStartedPulling="2026-02-24 05:14:45.228953941 +0000 UTC m=+16.263697127" lastFinishedPulling="2026-02-24 05:14:46.946137777 +0000 UTC m=+17.980880933" observedRunningTime="2026-02-24 05:14:47.411951262 +0000 UTC m=+18.446694408" watchObservedRunningTime="2026-02-24 05:14:47.413837624 +0000 UTC m=+18.448580780" Feb 24 05:14:50.392231 master-0 kubenswrapper[7614]: I0224 05:14:50.392147 7614 generic.go:334] "Generic (PLEG): container finished" podID="e6f05507-d5c1-4102-a220-1db715a496e3" containerID="acdec98fa977010c1aa977c4f0cce838f4bc4ae8e6cd6029b1446085a34e0532" exitCode=0 Feb 24 05:14:50.392231 master-0 kubenswrapper[7614]: I0224 05:14:50.392222 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" event={"ID":"e6f05507-d5c1-4102-a220-1db715a496e3","Type":"ContainerDied","Data":"acdec98fa977010c1aa977c4f0cce838f4bc4ae8e6cd6029b1446085a34e0532"} Feb 24 05:14:50.393578 master-0 kubenswrapper[7614]: I0224 05:14:50.392898 7614 scope.go:117] "RemoveContainer" containerID="acdec98fa977010c1aa977c4f0cce838f4bc4ae8e6cd6029b1446085a34e0532" Feb 24 05:14:51.398301 master-0 kubenswrapper[7614]: I0224 05:14:51.398112 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-r2vvc" 
event={"ID":"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b","Type":"ContainerStarted","Data":"496edea1de8f6dedfc55e5f52ccb92c796e4298902e9604d1121ff32d68bbbe0"} Feb 24 05:14:53.301731 master-0 kubenswrapper[7614]: I0224 05:14:53.301241 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-2w6mj"] Feb 24 05:14:53.309660 master-0 kubenswrapper[7614]: I0224 05:14:53.302206 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.414335 master-0 kubenswrapper[7614]: I0224 05:14:53.413038 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" event={"ID":"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9","Type":"ContainerStarted","Data":"e0310f65eb21da7836bef1892997027dc547f133c634a87f14b119b040f60bd1"} Feb 24 05:14:53.419151 master-0 kubenswrapper[7614]: I0224 05:14:53.416507 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" event={"ID":"933beda1-c930-4831-a886-3cc6b7a992ad","Type":"ContainerStarted","Data":"2c56b69fc4337064fa388eb97509499abfd2df910bf7a2fa34bbdc4682b29843"} Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421490 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-systemd\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421538 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-sys\") pod \"tuned-2w6mj\" (UID: 
\"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421565 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysconfig\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421582 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-conf\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421598 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-tuned\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421615 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-host\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421632 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-modprobe-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421650 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-kubernetes\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421677 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-lib-modules\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421695 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-tmp\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421715 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-var-lib-kubelet\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421747 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-dh2rh\" (UniqueName: \"kubernetes.io/projected/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-kube-api-access-dh2rh\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421776 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.421816 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-run\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.424336 master-0 kubenswrapper[7614]: I0224 05:14:53.423974 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" event={"ID":"e6f05507-d5c1-4102-a220-1db715a496e3","Type":"ContainerStarted","Data":"e2064230fd04624f769c4f745b80aa38ea29b6c2deabd8a0fd7e19128af8486a"} Feb 24 05:14:53.430718 master-0 kubenswrapper[7614]: I0224 05:14:53.429611 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" event={"ID":"7c4b448f-670e-45a1-bdd7-c42903c682a9","Type":"ContainerStarted","Data":"e338e09a246700858efa3e983721a941e7283cc7d53a58bf5899c50605032792"} Feb 24 05:14:53.441162 master-0 kubenswrapper[7614]: I0224 05:14:53.439543 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" 
event={"ID":"996ae0be-d36c-47f4-98b2-1c89591f9506","Type":"ContainerStarted","Data":"516b8e03c3aef3f9c784011155b7e0f97b511da3f0900967f8f66fa814221de6"} Feb 24 05:14:53.444426 master-0 kubenswrapper[7614]: I0224 05:14:53.443828 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" event={"ID":"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a","Type":"ContainerStarted","Data":"8e61e1d5a62185ea40dd7889454ccd250bbeb0122433d8e3015d94ba9f1d1334"} Feb 24 05:14:53.450748 master-0 kubenswrapper[7614]: I0224 05:14:53.450409 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"3a3e30c4351711cf4f4182871c1ce5a932bdfff6eaed56b06c55a876d10de8a1"} Feb 24 05:14:53.450748 master-0 kubenswrapper[7614]: I0224 05:14:53.450470 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"dd6d3f4e8c90f9e72cf283fa2ee57699a971df08e7b5a82fbc21deb33aca4d26"} Feb 24 05:14:53.522601 master-0 kubenswrapper[7614]: I0224 05:14:53.522527 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-modprobe-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.522601 master-0 kubenswrapper[7614]: I0224 05:14:53.522602 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-kubernetes\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " 
pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.522757 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-lib-modules\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.522784 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-tmp\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.522806 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-var-lib-kubelet\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.522885 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh2rh\" (UniqueName: \"kubernetes.io/projected/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-kube-api-access-dh2rh\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.522947 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " 
pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.522980 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-run\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.523088 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-systemd\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.523176 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-sys\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.523226 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysconfig\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.523257 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-conf\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 
05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.523297 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-tuned\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.523712 master-0 kubenswrapper[7614]: I0224 05:14:53.523337 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-host\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.525646 master-0 kubenswrapper[7614]: I0224 05:14:53.525599 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-conf\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.526106 master-0 kubenswrapper[7614]: I0224 05:14:53.526072 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysconfig\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.526328 master-0 kubenswrapper[7614]: I0224 05:14:53.526221 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-host\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.526387 master-0 kubenswrapper[7614]: I0224 05:14:53.526298 7614 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-modprobe-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.526599 master-0 kubenswrapper[7614]: I0224 05:14:53.526550 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-sys\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.526772 master-0 kubenswrapper[7614]: I0224 05:14:53.526714 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-kubernetes\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.527530 master-0 kubenswrapper[7614]: I0224 05:14:53.527492 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-var-lib-kubelet\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.527592 master-0 kubenswrapper[7614]: I0224 05:14:53.527553 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-lib-modules\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.528074 master-0 kubenswrapper[7614]: I0224 05:14:53.527992 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-run\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.528436 master-0 kubenswrapper[7614]: I0224 05:14:53.528398 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-systemd\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.533779 master-0 kubenswrapper[7614]: I0224 05:14:53.533114 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-tuned\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.534220 master-0 kubenswrapper[7614]: I0224 05:14:53.534168 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.543140 master-0 kubenswrapper[7614]: I0224 05:14:53.542592 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-tmp\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.598338 master-0 kubenswrapper[7614]: I0224 05:14:53.587379 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh2rh\" (UniqueName: \"kubernetes.io/projected/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-kube-api-access-dh2rh\") pod 
\"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.639298 master-0 kubenswrapper[7614]: I0224 05:14:53.639236 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:14:53.834749 master-0 kubenswrapper[7614]: I0224 05:14:53.834688 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-cdk2w"] Feb 24 05:14:53.835828 master-0 kubenswrapper[7614]: I0224 05:14:53.835802 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:53.837985 master-0 kubenswrapper[7614]: I0224 05:14:53.837934 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 05:14:53.838191 master-0 kubenswrapper[7614]: I0224 05:14:53.838154 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 05:14:53.838990 master-0 kubenswrapper[7614]: I0224 05:14:53.838962 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 05:14:53.839209 master-0 kubenswrapper[7614]: I0224 05:14:53.839171 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 05:14:53.843989 master-0 kubenswrapper[7614]: I0224 05:14:53.843937 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cdk2w"] Feb 24 05:14:53.937662 master-0 kubenswrapper[7614]: I0224 05:14:53.937589 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:53.937893 
master-0 kubenswrapper[7614]: I0224 05:14:53.937699 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3363f001-1cfa-41f5-b245-30cc99dd09cb-config-volume\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:53.937893 master-0 kubenswrapper[7614]: I0224 05:14:53.937734 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-589rv\" (UniqueName: \"kubernetes.io/projected/3363f001-1cfa-41f5-b245-30cc99dd09cb-kube-api-access-589rv\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.038837 master-0 kubenswrapper[7614]: I0224 05:14:54.038758 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3363f001-1cfa-41f5-b245-30cc99dd09cb-config-volume\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.038837 master-0 kubenswrapper[7614]: I0224 05:14:54.038839 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-589rv\" (UniqueName: \"kubernetes.io/projected/3363f001-1cfa-41f5-b245-30cc99dd09cb-kube-api-access-589rv\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.039075 master-0 kubenswrapper[7614]: I0224 05:14:54.038887 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.039127 master-0 kubenswrapper[7614]: 
E0224 05:14:54.039075 7614 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 24 05:14:54.039185 master-0 kubenswrapper[7614]: E0224 05:14:54.039139 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls podName:3363f001-1cfa-41f5-b245-30cc99dd09cb nodeName:}" failed. No retries permitted until 2026-02-24 05:14:54.539119708 +0000 UTC m=+25.573862864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls") pod "dns-default-cdk2w" (UID: "3363f001-1cfa-41f5-b245-30cc99dd09cb") : secret "dns-default-metrics-tls" not found Feb 24 05:14:54.040581 master-0 kubenswrapper[7614]: I0224 05:14:54.040132 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3363f001-1cfa-41f5-b245-30cc99dd09cb-config-volume\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.061585 master-0 kubenswrapper[7614]: I0224 05:14:54.061473 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-589rv\" (UniqueName: \"kubernetes.io/projected/3363f001-1cfa-41f5-b245-30cc99dd09cb-kube-api-access-589rv\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.257891 master-0 kubenswrapper[7614]: I0224 05:14:54.257808 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-ng8tz"] Feb 24 05:14:54.258383 master-0 kubenswrapper[7614]: I0224 05:14:54.258341 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:14:54.342971 master-0 kubenswrapper[7614]: I0224 05:14:54.342522 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl24z\" (UniqueName: \"kubernetes.io/projected/798dcf46-8377-46b8-8387-5261d9bbefa1-kube-api-access-jl24z\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:14:54.342971 master-0 kubenswrapper[7614]: I0224 05:14:54.342950 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/798dcf46-8377-46b8-8387-5261d9bbefa1-hosts-file\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:14:54.391825 master-0 kubenswrapper[7614]: I0224 05:14:54.391774 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj"] Feb 24 05:14:54.392537 master-0 kubenswrapper[7614]: I0224 05:14:54.392489 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.394584 master-0 kubenswrapper[7614]: I0224 05:14:54.394541 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 24 05:14:54.395450 master-0 kubenswrapper[7614]: I0224 05:14:54.395408 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 24 05:14:54.395653 master-0 kubenswrapper[7614]: I0224 05:14:54.395639 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 24 05:14:54.403804 master-0 kubenswrapper[7614]: I0224 05:14:54.403768 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj"] Feb 24 05:14:54.443943 master-0 kubenswrapper[7614]: I0224 05:14:54.443871 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.443943 master-0 kubenswrapper[7614]: I0224 05:14:54.443948 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/798dcf46-8377-46b8-8387-5261d9bbefa1-hosts-file\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:14:54.444216 master-0 kubenswrapper[7614]: I0224 05:14:54.444005 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" 
(UniqueName: \"kubernetes.io/empty-dir/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.444216 master-0 kubenswrapper[7614]: I0224 05:14:54.444036 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.444216 master-0 kubenswrapper[7614]: I0224 05:14:54.444052 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.444216 master-0 kubenswrapper[7614]: I0224 05:14:54.444091 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl24z\" (UniqueName: \"kubernetes.io/projected/798dcf46-8377-46b8-8387-5261d9bbefa1-kube-api-access-jl24z\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:14:54.444216 master-0 kubenswrapper[7614]: I0224 05:14:54.444111 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgl4j\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-kube-api-access-qgl4j\") pod 
\"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.444390 master-0 kubenswrapper[7614]: I0224 05:14:54.444232 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/798dcf46-8377-46b8-8387-5261d9bbefa1-hosts-file\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:14:54.459731 master-0 kubenswrapper[7614]: I0224 05:14:54.459669 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" event={"ID":"996ae0be-d36c-47f4-98b2-1c89591f9506","Type":"ContainerStarted","Data":"067af2398899ef3b6182e65cca2cf0e4064b7df16944553f74a6d4f063f2f255"} Feb 24 05:14:54.465341 master-0 kubenswrapper[7614]: I0224 05:14:54.462013 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" event={"ID":"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4","Type":"ContainerStarted","Data":"59fc54497b341e21266514ce4e7ac41faf2a69bb5401fb09a5a81eb7948af21e"} Feb 24 05:14:54.465341 master-0 kubenswrapper[7614]: I0224 05:14:54.462061 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" event={"ID":"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4","Type":"ContainerStarted","Data":"31328f5bd3de951decccf64c5d8038935ce8020c865bd9d8960f280ab1bc9568"} Feb 24 05:14:54.489714 master-0 kubenswrapper[7614]: I0224 05:14:54.489651 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" podStartSLOduration=1.4896277150000001 podStartE2EDuration="1.489627715s" podCreationTimestamp="2026-02-24 05:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:54.486695084 +0000 UTC m=+25.521438240" watchObservedRunningTime="2026-02-24 05:14:54.489627715 +0000 UTC m=+25.524370861" Feb 24 05:14:54.490862 master-0 kubenswrapper[7614]: I0224 05:14:54.490841 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96"] Feb 24 05:14:54.491545 master-0 kubenswrapper[7614]: I0224 05:14:54.491530 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" Feb 24 05:14:54.498282 master-0 kubenswrapper[7614]: I0224 05:14:54.494982 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl24z\" (UniqueName: \"kubernetes.io/projected/798dcf46-8377-46b8-8387-5261d9bbefa1-kube-api-access-jl24z\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:14:54.509482 master-0 kubenswrapper[7614]: I0224 05:14:54.507234 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96"] Feb 24 05:14:54.529263 master-0 kubenswrapper[7614]: I0224 05:14:54.529204 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs"] Feb 24 05:14:54.532276 master-0 kubenswrapper[7614]: I0224 05:14:54.532226 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.543460 master-0 kubenswrapper[7614]: I0224 05:14:54.543374 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 24 05:14:54.544126 master-0 kubenswrapper[7614]: I0224 05:14:54.544101 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 24 05:14:54.546358 master-0 kubenswrapper[7614]: I0224 05:14:54.544376 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 24 05:14:54.546358 master-0 kubenswrapper[7614]: I0224 05:14:54.544599 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.546358 master-0 kubenswrapper[7614]: I0224 05:14:54.544637 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb4rw\" (UniqueName: \"kubernetes.io/projected/b79ef90c-dc66-4d5f-8943-2c3ac68796ba-kube-api-access-zb4rw\") pod \"csi-snapshot-controller-6847bb4785-vqn96\" (UID: \"b79ef90c-dc66-4d5f-8943-2c3ac68796ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" Feb 24 05:14:54.546358 master-0 kubenswrapper[7614]: I0224 05:14:54.544788 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.546358 master-0 kubenswrapper[7614]: I0224 05:14:54.544813 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.546358 master-0 kubenswrapper[7614]: I0224 05:14:54.544848 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.546358 master-0 kubenswrapper[7614]: I0224 05:14:54.544870 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.546358 master-0 kubenswrapper[7614]: I0224 05:14:54.544956 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgl4j\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-kube-api-access-qgl4j\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.546628 master-0 kubenswrapper[7614]: I0224 05:14:54.546435 7614 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.546695 master-0 kubenswrapper[7614]: I0224 05:14:54.546665 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.547680 master-0 kubenswrapper[7614]: E0224 05:14:54.547661 7614 projected.go:301] Couldn't get configMap payload openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap references non-existent config key: ca-bundle.crt Feb 24 05:14:54.547750 master-0 kubenswrapper[7614]: E0224 05:14:54.547740 7614 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj: configmap references non-existent config key: ca-bundle.crt Feb 24 05:14:54.547856 master-0 kubenswrapper[7614]: E0224 05:14:54.547845 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs podName:347c43e5-86d5-436f-bdc5-1c7bbe19ab2a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:55.047825944 +0000 UTC m=+26.082569100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs") pod "operator-controller-controller-manager-9cc7d7bb-t75jj" (UID: "347c43e5-86d5-436f-bdc5-1c7bbe19ab2a") : configmap references non-existent config key: ca-bundle.crt Feb 24 05:14:54.547978 master-0 kubenswrapper[7614]: I0224 05:14:54.547959 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.550134 master-0 kubenswrapper[7614]: I0224 05:14:54.549492 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.559342 master-0 kubenswrapper[7614]: I0224 05:14:54.557412 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs"] Feb 24 05:14:54.568336 master-0 kubenswrapper[7614]: I0224 05:14:54.567106 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 24 05:14:54.580642 master-0 kubenswrapper[7614]: I0224 05:14:54.580595 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:14:54.585901 master-0 kubenswrapper[7614]: I0224 05:14:54.585870 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgl4j\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-kube-api-access-qgl4j\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:54.652333 master-0 kubenswrapper[7614]: I0224 05:14:54.648636 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb4rw\" (UniqueName: \"kubernetes.io/projected/b79ef90c-dc66-4d5f-8943-2c3ac68796ba-kube-api-access-zb4rw\") pod \"csi-snapshot-controller-6847bb4785-vqn96\" (UID: \"b79ef90c-dc66-4d5f-8943-2c3ac68796ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" Feb 24 05:14:54.652333 master-0 kubenswrapper[7614]: I0224 05:14:54.648724 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzp4b\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-kube-api-access-fzp4b\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.652333 master-0 kubenswrapper[7614]: I0224 05:14:54.649069 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.652333 master-0 kubenswrapper[7614]: I0224 
05:14:54.649181 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.652333 master-0 kubenswrapper[7614]: I0224 05:14:54.649227 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-cache\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.652333 master-0 kubenswrapper[7614]: I0224 05:14:54.649272 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.652333 master-0 kubenswrapper[7614]: I0224 05:14:54.649298 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.679334 master-0 kubenswrapper[7614]: I0224 05:14:54.675247 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb4rw\" (UniqueName: 
\"kubernetes.io/projected/b79ef90c-dc66-4d5f-8943-2c3ac68796ba-kube-api-access-zb4rw\") pod \"csi-snapshot-controller-6847bb4785-vqn96\" (UID: \"b79ef90c-dc66-4d5f-8943-2c3ac68796ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" Feb 24 05:14:54.751331 master-0 kubenswrapper[7614]: I0224 05:14:54.750754 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.751610 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-cache\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.751665 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.751692 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.751745 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzp4b\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-kube-api-access-fzp4b\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.751775 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.751909 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.754057 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.754473 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-cache\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.757323 master-0 kubenswrapper[7614]: I0224 05:14:54.756917 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.766361 master-0 kubenswrapper[7614]: I0224 05:14:54.763454 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:54.766361 master-0 kubenswrapper[7614]: I0224 05:14:54.764104 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.773598 master-0 kubenswrapper[7614]: I0224 05:14:54.773567 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzp4b\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-kube-api-access-fzp4b\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.829080 master-0 kubenswrapper[7614]: I0224 05:14:54.828691 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" Feb 24 05:14:54.855138 master-0 kubenswrapper[7614]: I0224 05:14:54.855086 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:54.937414 master-0 kubenswrapper[7614]: I0224 05:14:54.935441 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-666d7db58c-6d9wp"] Feb 24 05:14:54.944145 master-0 kubenswrapper[7614]: I0224 05:14:54.943817 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-666d7db58c-6d9wp"] Feb 24 05:14:54.954354 master-0 kubenswrapper[7614]: I0224 05:14:54.950426 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:54.956049 master-0 kubenswrapper[7614]: I0224 05:14:54.955628 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 05:14:54.958993 master-0 kubenswrapper[7614]: I0224 05:14:54.958933 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 05:14:54.959241 master-0 kubenswrapper[7614]: I0224 05:14:54.959209 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 05:14:54.959454 master-0 kubenswrapper[7614]: I0224 05:14:54.959426 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 05:14:54.959673 master-0 kubenswrapper[7614]: I0224 05:14:54.959644 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 05:14:54.986917 master-0 kubenswrapper[7614]: I0224 05:14:54.964069 7614 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:54.986917 master-0 kubenswrapper[7614]: I0224 05:14:54.964138 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-proxy-ca-bundles\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:54.986917 master-0 kubenswrapper[7614]: I0224 05:14:54.964221 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:54.986917 master-0 kubenswrapper[7614]: I0224 05:14:54.964300 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:54.986917 master-0 kubenswrapper[7614]: I0224 05:14:54.964343 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx66n\" (UniqueName: 
\"kubernetes.io/projected/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-kube-api-access-kx66n\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:54.986917 master-0 kubenswrapper[7614]: I0224 05:14:54.971030 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 05:14:55.064021 master-0 kubenswrapper[7614]: I0224 05:14:55.060943 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cdk2w"] Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: I0224 05:14:55.065213 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: I0224 05:14:55.065255 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-proxy-ca-bundles\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: I0224 05:14:55.065321 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: I0224 
05:14:55.065340 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: I0224 05:14:55.065365 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: I0224 05:14:55.065383 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx66n\" (UniqueName: \"kubernetes.io/projected/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-kube-api-access-kx66n\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: E0224 05:14:55.065801 7614 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: E0224 05:14:55.065854 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:55.565835776 +0000 UTC m=+26.600578922 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : configmap "config" not found Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: E0224 05:14:55.066041 7614 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: E0224 05:14:55.066114 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:55.566090884 +0000 UTC m=+26.600834040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : configmap "client-ca" not found Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: E0224 05:14:55.066214 7614 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: E0224 05:14:55.066244 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:55.566233968 +0000 UTC m=+26.600977124 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : secret "serving-cert" not found Feb 24 05:14:55.069120 master-0 kubenswrapper[7614]: I0224 05:14:55.068366 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-proxy-ca-bundles\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.085026 master-0 kubenswrapper[7614]: I0224 05:14:55.084985 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx66n\" (UniqueName: \"kubernetes.io/projected/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-kube-api-access-kx66n\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.087282 master-0 kubenswrapper[7614]: I0224 05:14:55.087249 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:55.135903 master-0 kubenswrapper[7614]: I0224 05:14:55.135661 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96"] Feb 24 05:14:55.208987 master-0 kubenswrapper[7614]: I0224 05:14:55.208699 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs"] Feb 24 05:14:55.309345 master-0 kubenswrapper[7614]: I0224 05:14:55.308904 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:55.478843 master-0 kubenswrapper[7614]: I0224 05:14:55.477833 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerStarted","Data":"b5410db202b2d2565e3f21ef6f188dc18cdaa71ef843bfa19039eca0376e0d6a"} Feb 24 05:14:55.481344 master-0 kubenswrapper[7614]: I0224 05:14:55.481116 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" event={"ID":"d9492fbf-d0f4-4ecf-84ba-b089d69535c1","Type":"ContainerStarted","Data":"0de580e3a4de4a7d062f7572a6d4a10fb107356c71fe5f479e8d76eb00cfe863"} Feb 24 05:14:55.486015 master-0 kubenswrapper[7614]: I0224 05:14:55.485968 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ng8tz" event={"ID":"798dcf46-8377-46b8-8387-5261d9bbefa1","Type":"ContainerStarted","Data":"fc92402c40e6077c1f09677e9bfca310101f3300a080adfc12dfeb6a7b235f58"} Feb 24 05:14:55.486015 master-0 kubenswrapper[7614]: I0224 05:14:55.486022 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-ng8tz" event={"ID":"798dcf46-8377-46b8-8387-5261d9bbefa1","Type":"ContainerStarted","Data":"b74c9c781dd953b15122d114627fe038414c5f0f995df649cb54aad5bc2f4e07"} Feb 24 05:14:55.488592 master-0 kubenswrapper[7614]: I0224 05:14:55.487613 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cdk2w" event={"ID":"3363f001-1cfa-41f5-b245-30cc99dd09cb","Type":"ContainerStarted","Data":"0e75a15a8297368a6c95abe6074b8d1fd12c66b5f2515773157daf62c40e79a8"} Feb 
24 05:14:55.507799 master-0 kubenswrapper[7614]: I0224 05:14:55.505899 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-ng8tz" podStartSLOduration=1.505865112 podStartE2EDuration="1.505865112s" podCreationTimestamp="2026-02-24 05:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:55.50469855 +0000 UTC m=+26.539441716" watchObservedRunningTime="2026-02-24 05:14:55.505865112 +0000 UTC m=+26.540608278" Feb 24 05:14:55.573571 master-0 kubenswrapper[7614]: I0224 05:14:55.573509 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.573827 master-0 kubenswrapper[7614]: I0224 05:14:55.573587 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.574222 master-0 kubenswrapper[7614]: E0224 05:14:55.574197 7614 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 05:14:55.574284 master-0 kubenswrapper[7614]: E0224 05:14:55.574273 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:56.574252194 +0000 UTC m=+27.608995350 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : configmap "client-ca" not found Feb 24 05:14:55.574625 master-0 kubenswrapper[7614]: I0224 05:14:55.574533 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.574693 master-0 kubenswrapper[7614]: E0224 05:14:55.574656 7614 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 24 05:14:55.576074 master-0 kubenswrapper[7614]: E0224 05:14:55.574754 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:56.574731997 +0000 UTC m=+27.609475153 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : configmap "config" not found Feb 24 05:14:55.578744 master-0 kubenswrapper[7614]: I0224 05:14:55.578682 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:55.683830 master-0 kubenswrapper[7614]: I0224 05:14:55.677119 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf"] Feb 24 05:14:55.683830 master-0 kubenswrapper[7614]: I0224 05:14:55.678052 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.685476 master-0 kubenswrapper[7614]: I0224 05:14:55.684823 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 05:14:55.685476 master-0 kubenswrapper[7614]: I0224 05:14:55.684941 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 05:14:55.685476 master-0 kubenswrapper[7614]: I0224 05:14:55.685036 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 05:14:55.685476 master-0 kubenswrapper[7614]: I0224 05:14:55.685207 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 05:14:55.685476 master-0 kubenswrapper[7614]: I0224 05:14:55.685296 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 05:14:55.690073 master-0 kubenswrapper[7614]: I0224 05:14:55.689957 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf"] Feb 24 05:14:55.692870 master-0 kubenswrapper[7614]: I0224 05:14:55.692793 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj"] Feb 24 05:14:55.777847 master-0 kubenswrapper[7614]: I0224 05:14:55.777732 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 
05:14:55.777847 master-0 kubenswrapper[7614]: I0224 05:14:55.777843 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq4wp\" (UniqueName: \"kubernetes.io/projected/de841409-bae0-4887-a92a-ec71cf6fae5e-kube-api-access-rq4wp\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.778053 master-0 kubenswrapper[7614]: I0224 05:14:55.777915 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.778053 master-0 kubenswrapper[7614]: I0224 05:14:55.777959 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de841409-bae0-4887-a92a-ec71cf6fae5e-serving-cert\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.880077 master-0 kubenswrapper[7614]: I0224 05:14:55.880014 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de841409-bae0-4887-a92a-ec71cf6fae5e-serving-cert\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.880176 master-0 kubenswrapper[7614]: I0224 05:14:55.880122 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.880176 master-0 kubenswrapper[7614]: I0224 05:14:55.880141 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq4wp\" (UniqueName: \"kubernetes.io/projected/de841409-bae0-4887-a92a-ec71cf6fae5e-kube-api-access-rq4wp\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.880276 master-0 kubenswrapper[7614]: I0224 05:14:55.880192 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.880888 master-0 kubenswrapper[7614]: E0224 05:14:55.880832 7614 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 05:14:55.880982 master-0 kubenswrapper[7614]: E0224 05:14:55.880960 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca podName:de841409-bae0-4887-a92a-ec71cf6fae5e nodeName:}" failed. No retries permitted until 2026-02-24 05:14:56.380931601 +0000 UTC m=+27.415674757 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca") pod "route-controller-manager-7bcb58f8c7-49bnf" (UID: "de841409-bae0-4887-a92a-ec71cf6fae5e") : configmap "client-ca" not found Feb 24 05:14:55.881375 master-0 kubenswrapper[7614]: E0224 05:14:55.881353 7614 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Feb 24 05:14:55.881417 master-0 kubenswrapper[7614]: E0224 05:14:55.881387 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config podName:de841409-bae0-4887-a92a-ec71cf6fae5e nodeName:}" failed. No retries permitted until 2026-02-24 05:14:56.381378724 +0000 UTC m=+27.416121880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config") pod "route-controller-manager-7bcb58f8c7-49bnf" (UID: "de841409-bae0-4887-a92a-ec71cf6fae5e") : configmap "config" not found Feb 24 05:14:55.890893 master-0 kubenswrapper[7614]: I0224 05:14:55.890660 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de841409-bae0-4887-a92a-ec71cf6fae5e-serving-cert\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:55.908378 master-0 kubenswrapper[7614]: I0224 05:14:55.906732 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq4wp\" (UniqueName: \"kubernetes.io/projected/de841409-bae0-4887-a92a-ec71cf6fae5e-kube-api-access-rq4wp\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " 
pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:56.388368 master-0 kubenswrapper[7614]: I0224 05:14:56.387985 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:56.388368 master-0 kubenswrapper[7614]: I0224 05:14:56.388365 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:56.388794 master-0 kubenswrapper[7614]: E0224 05:14:56.388196 7614 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Feb 24 05:14:56.388794 master-0 kubenswrapper[7614]: E0224 05:14:56.388553 7614 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 24 05:14:56.388794 master-0 kubenswrapper[7614]: E0224 05:14:56.388754 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config podName:de841409-bae0-4887-a92a-ec71cf6fae5e nodeName:}" failed. No retries permitted until 2026-02-24 05:14:57.388599518 +0000 UTC m=+28.423342724 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config") pod "route-controller-manager-7bcb58f8c7-49bnf" (UID: "de841409-bae0-4887-a92a-ec71cf6fae5e") : configmap "config" not found Feb 24 05:14:56.388999 master-0 kubenswrapper[7614]: E0224 05:14:56.388806 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca podName:de841409-bae0-4887-a92a-ec71cf6fae5e nodeName:}" failed. No retries permitted until 2026-02-24 05:14:57.388794543 +0000 UTC m=+28.423537699 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca") pod "route-controller-manager-7bcb58f8c7-49bnf" (UID: "de841409-bae0-4887-a92a-ec71cf6fae5e") : configmap "client-ca" not found Feb 24 05:14:56.527354 master-0 kubenswrapper[7614]: I0224 05:14:56.520057 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 24 05:14:56.527354 master-0 kubenswrapper[7614]: I0224 05:14:56.520635 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.527354 master-0 kubenswrapper[7614]: I0224 05:14:56.522091 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" event={"ID":"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a","Type":"ContainerStarted","Data":"2d045474bcc808e888ea99613b34e01a0a66e116cc0e638b11e46fe3a36672c9"} Feb 24 05:14:56.527354 master-0 kubenswrapper[7614]: I0224 05:14:56.522116 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" event={"ID":"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a","Type":"ContainerStarted","Data":"54f08b019978c50707a9af7625f4b1969ac2f9de3d91bdb89125a98cc8b35f5f"} Feb 24 05:14:56.527354 master-0 kubenswrapper[7614]: I0224 05:14:56.522126 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" event={"ID":"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a","Type":"ContainerStarted","Data":"32f719b1fae3e7d132b769e21e46c31c5ab4d99d85c92e0fd1953cfcbf40dc0a"} Feb 24 05:14:56.527354 master-0 kubenswrapper[7614]: I0224 05:14:56.522687 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:14:56.563160 master-0 kubenswrapper[7614]: I0224 05:14:56.562693 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 24 05:14:56.567331 master-0 kubenswrapper[7614]: I0224 05:14:56.566635 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" event={"ID":"d9492fbf-d0f4-4ecf-84ba-b089d69535c1","Type":"ContainerStarted","Data":"189c37430c077be09301cf49e843b65676efb76e5d67d2ea4dd214f2f7102ef5"} Feb 24 05:14:56.567331 master-0 
kubenswrapper[7614]: I0224 05:14:56.566679 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" event={"ID":"d9492fbf-d0f4-4ecf-84ba-b089d69535c1","Type":"ContainerStarted","Data":"9e2715addd0f34bc4408328c5d9cad44d2936e4baa20d1665cd908d3e434bdfd"} Feb 24 05:14:56.567331 master-0 kubenswrapper[7614]: I0224 05:14:56.567135 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:14:56.584261 master-0 kubenswrapper[7614]: I0224 05:14:56.583130 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: I0224 05:14:56.590491 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: I0224 05:14:56.590620 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-var-lock\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: E0224 05:14:56.590818 7614 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: E0224 05:14:56.590870 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. 
No retries permitted until 2026-02-24 05:14:58.590853812 +0000 UTC m=+29.625596968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : configmap "config" not found Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: E0224 05:14:56.591829 7614 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: I0224 05:14:56.591909 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: E0224 05:14:56.591922 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. No retries permitted until 2026-02-24 05:14:58.59189582 +0000 UTC m=+29.626638976 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : configmap "client-ca" not found Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: I0224 05:14:56.592072 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.594353 master-0 kubenswrapper[7614]: I0224 05:14:56.592092 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.607465 master-0 kubenswrapper[7614]: I0224 05:14:56.607381 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podStartSLOduration=2.607353271 podStartE2EDuration="2.607353271s" podCreationTimestamp="2026-02-24 05:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:56.604493211 +0000 UTC m=+27.639236387" watchObservedRunningTime="2026-02-24 05:14:56.607353271 +0000 UTC m=+27.642096427" Feb 24 05:14:56.643339 master-0 kubenswrapper[7614]: I0224 05:14:56.641150 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" podStartSLOduration=2.641118689 podStartE2EDuration="2.641118689s" 
podCreationTimestamp="2026-02-24 05:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:56.623449117 +0000 UTC m=+27.658192283" watchObservedRunningTime="2026-02-24 05:14:56.641118689 +0000 UTC m=+27.675861845" Feb 24 05:14:56.695714 master-0 kubenswrapper[7614]: I0224 05:14:56.695386 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.695714 master-0 kubenswrapper[7614]: I0224 05:14:56.695434 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.695714 master-0 kubenswrapper[7614]: I0224 05:14:56.695571 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.696084 master-0 kubenswrapper[7614]: I0224 05:14:56.695814 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-var-lock\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.696084 master-0 kubenswrapper[7614]: I0224 05:14:56.696037 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-var-lock\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.733373 master-0 kubenswrapper[7614]: I0224 05:14:56.732362 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") " pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:56.892434 master-0 kubenswrapper[7614]: I0224 05:14:56.892273 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 24 05:14:57.189413 master-0 kubenswrapper[7614]: I0224 05:14:57.189262 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-666d7db58c-6d9wp"] Feb 24 05:14:57.189698 master-0 kubenswrapper[7614]: E0224 05:14:57.189664 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" podUID="34b7f824-db5f-4c7a-8e5d-ea70227dec1a" Feb 24 05:14:57.218500 master-0 kubenswrapper[7614]: I0224 05:14:57.212758 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf"] Feb 24 05:14:57.218500 master-0 kubenswrapper[7614]: E0224 05:14:57.213338 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" podUID="de841409-bae0-4887-a92a-ec71cf6fae5e" Feb 24 05:14:57.414333 master-0 
kubenswrapper[7614]: I0224 05:14:57.409794 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:57.414333 master-0 kubenswrapper[7614]: I0224 05:14:57.409942 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:57.414333 master-0 kubenswrapper[7614]: I0224 05:14:57.411338 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:57.414333 master-0 kubenswrapper[7614]: I0224 05:14:57.411447 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca\") pod \"route-controller-manager-7bcb58f8c7-49bnf\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:57.570700 master-0 kubenswrapper[7614]: I0224 05:14:57.570650 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:57.571195 master-0 kubenswrapper[7614]: I0224 05:14:57.571116 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:57.583722 master-0 kubenswrapper[7614]: I0224 05:14:57.583635 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:57.590505 master-0 kubenswrapper[7614]: I0224 05:14:57.590459 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:57.613816 master-0 kubenswrapper[7614]: I0224 05:14:57.613387 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de841409-bae0-4887-a92a-ec71cf6fae5e-serving-cert\") pod \"de841409-bae0-4887-a92a-ec71cf6fae5e\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " Feb 24 05:14:57.613816 master-0 kubenswrapper[7614]: I0224 05:14:57.613446 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca\") pod \"de841409-bae0-4887-a92a-ec71cf6fae5e\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " Feb 24 05:14:57.613816 master-0 kubenswrapper[7614]: I0224 05:14:57.613519 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config\") pod \"de841409-bae0-4887-a92a-ec71cf6fae5e\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " Feb 24 05:14:57.613816 master-0 kubenswrapper[7614]: I0224 05:14:57.613545 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-rq4wp\" (UniqueName: \"kubernetes.io/projected/de841409-bae0-4887-a92a-ec71cf6fae5e-kube-api-access-rq4wp\") pod \"de841409-bae0-4887-a92a-ec71cf6fae5e\" (UID: \"de841409-bae0-4887-a92a-ec71cf6fae5e\") " Feb 24 05:14:57.613816 master-0 kubenswrapper[7614]: I0224 05:14:57.613568 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-proxy-ca-bundles\") pod \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " Feb 24 05:14:57.614297 master-0 kubenswrapper[7614]: I0224 05:14:57.614257 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx66n\" (UniqueName: \"kubernetes.io/projected/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-kube-api-access-kx66n\") pod \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " Feb 24 05:14:57.614350 master-0 kubenswrapper[7614]: I0224 05:14:57.614324 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert\") pod \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " Feb 24 05:14:57.614753 master-0 kubenswrapper[7614]: I0224 05:14:57.614695 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "34b7f824-db5f-4c7a-8e5d-ea70227dec1a" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:14:57.615099 master-0 kubenswrapper[7614]: I0224 05:14:57.615078 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca" (OuterVolumeSpecName: "client-ca") pod "de841409-bae0-4887-a92a-ec71cf6fae5e" (UID: "de841409-bae0-4887-a92a-ec71cf6fae5e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:14:57.616820 master-0 kubenswrapper[7614]: I0224 05:14:57.616715 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config" (OuterVolumeSpecName: "config") pod "de841409-bae0-4887-a92a-ec71cf6fae5e" (UID: "de841409-bae0-4887-a92a-ec71cf6fae5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:14:57.622398 master-0 kubenswrapper[7614]: I0224 05:14:57.622340 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de841409-bae0-4887-a92a-ec71cf6fae5e-kube-api-access-rq4wp" (OuterVolumeSpecName: "kube-api-access-rq4wp") pod "de841409-bae0-4887-a92a-ec71cf6fae5e" (UID: "de841409-bae0-4887-a92a-ec71cf6fae5e"). InnerVolumeSpecName "kube-api-access-rq4wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:14:57.622470 master-0 kubenswrapper[7614]: I0224 05:14:57.622360 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de841409-bae0-4887-a92a-ec71cf6fae5e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de841409-bae0-4887-a92a-ec71cf6fae5e" (UID: "de841409-bae0-4887-a92a-ec71cf6fae5e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:14:57.622517 master-0 kubenswrapper[7614]: I0224 05:14:57.622430 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-kube-api-access-kx66n" (OuterVolumeSpecName: "kube-api-access-kx66n") pod "34b7f824-db5f-4c7a-8e5d-ea70227dec1a" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a"). InnerVolumeSpecName "kube-api-access-kx66n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:14:57.624842 master-0 kubenswrapper[7614]: I0224 05:14:57.624754 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "34b7f824-db5f-4c7a-8e5d-ea70227dec1a" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:14:57.716335 master-0 kubenswrapper[7614]: I0224 05:14:57.716237 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:57.716335 master-0 kubenswrapper[7614]: I0224 05:14:57.716295 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq4wp\" (UniqueName: \"kubernetes.io/projected/de841409-bae0-4887-a92a-ec71cf6fae5e-kube-api-access-rq4wp\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:57.716335 master-0 kubenswrapper[7614]: I0224 05:14:57.716327 7614 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:57.716335 master-0 kubenswrapper[7614]: I0224 05:14:57.716341 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx66n\" (UniqueName: 
\"kubernetes.io/projected/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-kube-api-access-kx66n\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:57.716335 master-0 kubenswrapper[7614]: I0224 05:14:57.716353 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:57.716697 master-0 kubenswrapper[7614]: I0224 05:14:57.716366 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de841409-bae0-4887-a92a-ec71cf6fae5e-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:57.716697 master-0 kubenswrapper[7614]: I0224 05:14:57.716379 7614 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de841409-bae0-4887-a92a-ec71cf6fae5e-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:58.052060 master-0 kubenswrapper[7614]: I0224 05:14:58.051704 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-786f58c449-64k2s"] Feb 24 05:14:58.052646 master-0 kubenswrapper[7614]: I0224 05:14:58.052613 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.057063 master-0 kubenswrapper[7614]: I0224 05:14:58.056792 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 05:14:58.057063 master-0 kubenswrapper[7614]: I0224 05:14:58.056802 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Feb 24 05:14:58.057200 master-0 kubenswrapper[7614]: I0224 05:14:58.057164 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 05:14:58.060187 master-0 kubenswrapper[7614]: I0224 05:14:58.057853 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 05:14:58.060187 master-0 kubenswrapper[7614]: I0224 05:14:58.058352 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Feb 24 05:14:58.060187 master-0 kubenswrapper[7614]: I0224 05:14:58.060098 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 05:14:58.061081 master-0 kubenswrapper[7614]: I0224 05:14:58.060569 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 05:14:58.066986 master-0 kubenswrapper[7614]: I0224 05:14:58.063574 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 05:14:58.066986 master-0 kubenswrapper[7614]: I0224 05:14:58.064836 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 05:14:58.066986 master-0 kubenswrapper[7614]: I0224 05:14:58.066484 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-786f58c449-64k2s"] Feb 24 05:14:58.068420 master-0 kubenswrapper[7614]: I0224 05:14:58.067861 7614 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 05:14:58.125341 master-0 kubenswrapper[7614]: I0224 05:14:58.125246 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-node-pullsecrets\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125496 master-0 kubenswrapper[7614]: I0224 05:14:58.125377 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-serving-ca\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125496 master-0 kubenswrapper[7614]: I0224 05:14:58.125463 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-trusted-ca-bundle\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125645 master-0 kubenswrapper[7614]: I0224 05:14:58.125533 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-client\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125645 master-0 kubenswrapper[7614]: I0224 05:14:58.125626 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-config\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125750 master-0 kubenswrapper[7614]: I0224 05:14:58.125665 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-serving-cert\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125750 master-0 kubenswrapper[7614]: I0224 05:14:58.125702 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-encryption-config\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125858 master-0 kubenswrapper[7614]: I0224 05:14:58.125756 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125858 master-0 kubenswrapper[7614]: I0224 05:14:58.125796 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-image-import-ca\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.125858 master-0 kubenswrapper[7614]: I0224 05:14:58.125829 7614 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit-dir\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.126012 master-0 kubenswrapper[7614]: I0224 05:14:58.125896 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s67l4\" (UniqueName: \"kubernetes.io/projected/5afb5486-5f33-4755-bfe9-c993d0d9ca71-kube-api-access-s67l4\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.236752 master-0 kubenswrapper[7614]: I0224 05:14:58.236692 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit-dir\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237059 master-0 kubenswrapper[7614]: I0224 05:14:58.236908 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit-dir\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237059 master-0 kubenswrapper[7614]: I0224 05:14:58.237035 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s67l4\" (UniqueName: \"kubernetes.io/projected/5afb5486-5f33-4755-bfe9-c993d0d9ca71-kube-api-access-s67l4\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 
05:14:58.237331 master-0 kubenswrapper[7614]: I0224 05:14:58.237077 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-node-pullsecrets\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237331 master-0 kubenswrapper[7614]: I0224 05:14:58.237125 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-serving-ca\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237331 master-0 kubenswrapper[7614]: I0224 05:14:58.237183 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-trusted-ca-bundle\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237331 master-0 kubenswrapper[7614]: I0224 05:14:58.237209 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-client\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237646 master-0 kubenswrapper[7614]: I0224 05:14:58.237415 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-node-pullsecrets\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " 
pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237646 master-0 kubenswrapper[7614]: I0224 05:14:58.237524 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-config\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237646 master-0 kubenswrapper[7614]: I0224 05:14:58.237606 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-serving-cert\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237646 master-0 kubenswrapper[7614]: I0224 05:14:58.237639 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-encryption-config\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237943 master-0 kubenswrapper[7614]: I0224 05:14:58.237704 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.237943 master-0 kubenswrapper[7614]: I0224 05:14:58.237741 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-image-import-ca\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " 
pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.238163 master-0 kubenswrapper[7614]: E0224 05:14:58.238095 7614 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 05:14:58.238204 master-0 kubenswrapper[7614]: E0224 05:14:58.238177 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit podName:5afb5486-5f33-4755-bfe9-c993d0d9ca71 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:58.738151346 +0000 UTC m=+29.772894512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit") pod "apiserver-786f58c449-64k2s" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71") : configmap "audit-0" not found Feb 24 05:14:58.238204 master-0 kubenswrapper[7614]: I0224 05:14:58.238186 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-serving-ca\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.238826 master-0 kubenswrapper[7614]: I0224 05:14:58.238751 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-image-import-ca\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.239057 master-0 kubenswrapper[7614]: I0224 05:14:58.239012 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-config\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " 
pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.239775 master-0 kubenswrapper[7614]: I0224 05:14:58.239735 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-trusted-ca-bundle\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.241683 master-0 kubenswrapper[7614]: I0224 05:14:58.241655 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-client\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.242972 master-0 kubenswrapper[7614]: I0224 05:14:58.242401 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-encryption-config\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.244078 master-0 kubenswrapper[7614]: I0224 05:14:58.244010 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-serving-cert\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.263604 master-0 kubenswrapper[7614]: I0224 05:14:58.263491 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s67l4\" (UniqueName: \"kubernetes.io/projected/5afb5486-5f33-4755-bfe9-c993d0d9ca71-kube-api-access-s67l4\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") 
" pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.452610 master-0 kubenswrapper[7614]: I0224 05:14:58.452553 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 24 05:14:58.580285 master-0 kubenswrapper[7614]: I0224 05:14:58.580213 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2d3d57f1-cd67-4f1d-b267-f652b9bb3448","Type":"ContainerStarted","Data":"345bd8023fa43822945ff7359cdfe764906fb44812bf8f7d37334c964ddefedc"} Feb 24 05:14:58.583858 master-0 kubenswrapper[7614]: I0224 05:14:58.583794 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cdk2w" event={"ID":"3363f001-1cfa-41f5-b245-30cc99dd09cb","Type":"ContainerStarted","Data":"dc47fbb72c6439dc34785d0d757efea6d3caf49fa222c8310abefbc4c023cd4e"} Feb 24 05:14:58.583972 master-0 kubenswrapper[7614]: I0224 05:14:58.583862 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cdk2w" event={"ID":"3363f001-1cfa-41f5-b245-30cc99dd09cb","Type":"ContainerStarted","Data":"720624cacb7fe8516dce728a79168e1519d8df180996511d68531eb6635508fc"} Feb 24 05:14:58.584348 master-0 kubenswrapper[7614]: I0224 05:14:58.584293 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-cdk2w" Feb 24 05:14:58.587178 master-0 kubenswrapper[7614]: I0224 05:14:58.586921 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:58.587178 master-0 kubenswrapper[7614]: I0224 05:14:58.587006 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerStarted","Data":"92100dde9dbd51740744fac31aa4b79ba4dfcf0cd902c28d6ae66b9259196300"} Feb 24 05:14:58.587346 master-0 kubenswrapper[7614]: I0224 05:14:58.587245 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf" Feb 24 05:14:58.608222 master-0 kubenswrapper[7614]: I0224 05:14:58.608125 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-cdk2w" podStartSLOduration=2.670767707 podStartE2EDuration="5.608102003s" podCreationTimestamp="2026-02-24 05:14:53 +0000 UTC" firstStartedPulling="2026-02-24 05:14:55.084568528 +0000 UTC m=+26.119311684" lastFinishedPulling="2026-02-24 05:14:58.021902794 +0000 UTC m=+29.056645980" observedRunningTime="2026-02-24 05:14:58.607564118 +0000 UTC m=+29.642307294" watchObservedRunningTime="2026-02-24 05:14:58.608102003 +0000 UTC m=+29.642845159" Feb 24 05:14:58.631925 master-0 kubenswrapper[7614]: I0224 05:14:58.631837 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podStartSLOduration=1.773871875 podStartE2EDuration="4.631798512s" podCreationTimestamp="2026-02-24 05:14:54 +0000 UTC" firstStartedPulling="2026-02-24 05:14:55.155738287 +0000 UTC m=+26.190481433" lastFinishedPulling="2026-02-24 05:14:58.013664904 +0000 UTC m=+29.048408070" observedRunningTime="2026-02-24 05:14:58.625745273 +0000 UTC m=+29.660488429" watchObservedRunningTime="2026-02-24 05:14:58.631798512 +0000 UTC m=+29.666541678" Feb 24 
05:14:58.646434 master-0 kubenswrapper[7614]: I0224 05:14:58.646362 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:58.647135 master-0 kubenswrapper[7614]: E0224 05:14:58.647081 7614 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 24 05:14:58.647196 master-0 kubenswrapper[7614]: E0224 05:14:58.647169 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. No retries permitted until 2026-02-24 05:15:02.647142809 +0000 UTC m=+33.681886155 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : object "openshift-controller-manager"/"config" not registered Feb 24 05:14:58.648637 master-0 kubenswrapper[7614]: I0224 05:14:58.648292 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca\") pod \"controller-manager-666d7db58c-6d9wp\" (UID: \"34b7f824-db5f-4c7a-8e5d-ea70227dec1a\") " pod="openshift-controller-manager/controller-manager-666d7db58c-6d9wp" Feb 24 05:14:58.648637 master-0 kubenswrapper[7614]: E0224 05:14:58.648460 7614 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 24 05:14:58.648637 master-0 kubenswrapper[7614]: E0224 05:14:58.648523 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca podName:34b7f824-db5f-4c7a-8e5d-ea70227dec1a nodeName:}" failed. No retries permitted until 2026-02-24 05:15:02.648489346 +0000 UTC m=+33.683232692 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca") pod "controller-manager-666d7db58c-6d9wp" (UID: "34b7f824-db5f-4c7a-8e5d-ea70227dec1a") : object "openshift-controller-manager"/"client-ca" not registered Feb 24 05:14:58.655905 master-0 kubenswrapper[7614]: I0224 05:14:58.655851 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh"] Feb 24 05:14:58.656526 master-0 kubenswrapper[7614]: I0224 05:14:58.656500 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.659532 master-0 kubenswrapper[7614]: I0224 05:14:58.659497 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 05:14:58.682516 master-0 kubenswrapper[7614]: I0224 05:14:58.682467 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 05:14:58.683778 master-0 kubenswrapper[7614]: I0224 05:14:58.683749 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 05:14:58.686212 master-0 kubenswrapper[7614]: I0224 05:14:58.686083 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 05:14:58.686639 master-0 kubenswrapper[7614]: I0224 05:14:58.686595 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 05:14:58.718469 master-0 kubenswrapper[7614]: I0224 05:14:58.716465 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 05:14:58.719982 master-0 kubenswrapper[7614]: I0224 05:14:58.719909 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-666d7db58c-6d9wp"] Feb 24 05:14:58.722984 master-0 kubenswrapper[7614]: I0224 05:14:58.722938 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh"] Feb 24 05:14:58.725114 master-0 kubenswrapper[7614]: I0224 05:14:58.725048 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-666d7db58c-6d9wp"] Feb 24 05:14:58.750767 master-0 kubenswrapper[7614]: I0224 05:14:58.749704 7614 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-config\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.750767 master-0 kubenswrapper[7614]: I0224 05:14:58.749777 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d458399b-415c-4aa6-be1a-7364c42841c7-serving-cert\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.750767 master-0 kubenswrapper[7614]: I0224 05:14:58.749815 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-client-ca\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.750767 master-0 kubenswrapper[7614]: I0224 05:14:58.749894 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-proxy-ca-bundles\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.750767 master-0 kubenswrapper[7614]: I0224 05:14:58.749964 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit\") pod \"apiserver-786f58c449-64k2s\" (UID: 
\"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:58.750767 master-0 kubenswrapper[7614]: I0224 05:14:58.750032 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8nv5\" (UniqueName: \"kubernetes.io/projected/d458399b-415c-4aa6-be1a-7364c42841c7-kube-api-access-g8nv5\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.750767 master-0 kubenswrapper[7614]: E0224 05:14:58.750242 7614 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 05:14:58.750767 master-0 kubenswrapper[7614]: E0224 05:14:58.750299 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit podName:5afb5486-5f33-4755-bfe9-c993d0d9ca71 nodeName:}" failed. No retries permitted until 2026-02-24 05:14:59.750279527 +0000 UTC m=+30.785022683 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit") pod "apiserver-786f58c449-64k2s" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71") : configmap "audit-0" not found Feb 24 05:14:58.762623 master-0 kubenswrapper[7614]: I0224 05:14:58.762456 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf"] Feb 24 05:14:58.762623 master-0 kubenswrapper[7614]: I0224 05:14:58.762509 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb58f8c7-49bnf"] Feb 24 05:14:58.859158 master-0 kubenswrapper[7614]: I0224 05:14:58.858421 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-config\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.859158 master-0 kubenswrapper[7614]: I0224 05:14:58.858502 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d458399b-415c-4aa6-be1a-7364c42841c7-serving-cert\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.859158 master-0 kubenswrapper[7614]: I0224 05:14:58.858535 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-client-ca\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.859158 master-0 kubenswrapper[7614]: 
I0224 05:14:58.858570 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-proxy-ca-bundles\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.859158 master-0 kubenswrapper[7614]: I0224 05:14:58.858790 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8nv5\" (UniqueName: \"kubernetes.io/projected/d458399b-415c-4aa6-be1a-7364c42841c7-kube-api-access-g8nv5\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.859158 master-0 kubenswrapper[7614]: I0224 05:14:58.858838 7614 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:58.859158 master-0 kubenswrapper[7614]: I0224 05:14:58.858854 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34b7f824-db5f-4c7a-8e5d-ea70227dec1a-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:14:58.860282 master-0 kubenswrapper[7614]: I0224 05:14:58.859612 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-config\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.860282 master-0 kubenswrapper[7614]: I0224 05:14:58.859962 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-client-ca\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.861509 master-0 kubenswrapper[7614]: I0224 05:14:58.861454 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-proxy-ca-bundles\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.868455 master-0 kubenswrapper[7614]: I0224 05:14:58.867132 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d458399b-415c-4aa6-be1a-7364c42841c7-serving-cert\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:58.886787 master-0 kubenswrapper[7614]: I0224 05:14:58.886715 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8nv5\" (UniqueName: \"kubernetes.io/projected/d458399b-415c-4aa6-be1a-7364c42841c7-kube-api-access-g8nv5\") pod \"controller-manager-669d5ddb7c-jzjkh\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:59.042804 master-0 kubenswrapper[7614]: I0224 05:14:59.042735 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:14:59.192983 master-0 kubenswrapper[7614]: I0224 05:14:59.188832 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b7f824-db5f-4c7a-8e5d-ea70227dec1a" path="/var/lib/kubelet/pods/34b7f824-db5f-4c7a-8e5d-ea70227dec1a/volumes" Feb 24 05:14:59.192983 master-0 kubenswrapper[7614]: I0224 05:14:59.189290 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de841409-bae0-4887-a92a-ec71cf6fae5e" path="/var/lib/kubelet/pods/de841409-bae0-4887-a92a-ec71cf6fae5e/volumes" Feb 24 05:14:59.196704 master-0 kubenswrapper[7614]: I0224 05:14:59.196639 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh"] Feb 24 05:14:59.253671 master-0 kubenswrapper[7614]: I0224 05:14:59.251567 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv"] Feb 24 05:14:59.253671 master-0 kubenswrapper[7614]: I0224 05:14:59.252186 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.257562 master-0 kubenswrapper[7614]: I0224 05:14:59.256166 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 05:14:59.257562 master-0 kubenswrapper[7614]: I0224 05:14:59.256471 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 05:14:59.257562 master-0 kubenswrapper[7614]: I0224 05:14:59.256630 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 05:14:59.257562 master-0 kubenswrapper[7614]: I0224 05:14:59.256788 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 05:14:59.257562 master-0 kubenswrapper[7614]: I0224 05:14:59.256957 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 05:14:59.273068 master-0 kubenswrapper[7614]: I0224 05:14:59.272554 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv"] Feb 24 05:14:59.370817 master-0 kubenswrapper[7614]: I0224 05:14:59.370588 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlhlf\" (UniqueName: \"kubernetes.io/projected/75a2f046-94a3-481e-b8f5-b2666e151fc9-kube-api-access-mlhlf\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.370817 master-0 kubenswrapper[7614]: I0224 05:14:59.370765 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-config\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.371169 master-0 kubenswrapper[7614]: I0224 05:14:59.370846 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-client-ca\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.371169 master-0 kubenswrapper[7614]: I0224 05:14:59.370917 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a2f046-94a3-481e-b8f5-b2666e151fc9-serving-cert\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.472694 master-0 kubenswrapper[7614]: I0224 05:14:59.472606 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-config\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.473080 master-0 kubenswrapper[7614]: I0224 05:14:59.473026 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-client-ca\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " 
pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.473168 master-0 kubenswrapper[7614]: I0224 05:14:59.473135 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a2f046-94a3-481e-b8f5-b2666e151fc9-serving-cert\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.473580 master-0 kubenswrapper[7614]: I0224 05:14:59.473538 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlhlf\" (UniqueName: \"kubernetes.io/projected/75a2f046-94a3-481e-b8f5-b2666e151fc9-kube-api-access-mlhlf\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.474011 master-0 kubenswrapper[7614]: I0224 05:14:59.473973 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-client-ca\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.474374 master-0 kubenswrapper[7614]: I0224 05:14:59.474330 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-config\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.488814 master-0 kubenswrapper[7614]: I0224 05:14:59.488714 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a2f046-94a3-481e-b8f5-b2666e151fc9-serving-cert\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.496731 master-0 kubenswrapper[7614]: I0224 05:14:59.493498 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlhlf\" (UniqueName: \"kubernetes.io/projected/75a2f046-94a3-481e-b8f5-b2666e151fc9-kube-api-access-mlhlf\") pod \"route-controller-manager-56fdc6b8c6-52tgv\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.528340 master-0 kubenswrapper[7614]: I0224 05:14:59.528212 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh"] Feb 24 05:14:59.537269 master-0 kubenswrapper[7614]: W0224 05:14:59.537208 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd458399b_415c_4aa6_be1a_7364c42841c7.slice/crio-87e032a96cc9b8f5869d8fccac9d6efc8bfee5fb3c683089c50456e7eea4cb4b WatchSource:0}: Error finding container 87e032a96cc9b8f5869d8fccac9d6efc8bfee5fb3c683089c50456e7eea4cb4b: Status 404 returned error can't find the container with id 87e032a96cc9b8f5869d8fccac9d6efc8bfee5fb3c683089c50456e7eea4cb4b Feb 24 05:14:59.573857 master-0 kubenswrapper[7614]: I0224 05:14:59.573764 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:14:59.593123 master-0 kubenswrapper[7614]: I0224 05:14:59.593031 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" event={"ID":"d458399b-415c-4aa6-be1a-7364c42841c7","Type":"ContainerStarted","Data":"87e032a96cc9b8f5869d8fccac9d6efc8bfee5fb3c683089c50456e7eea4cb4b"} Feb 24 05:14:59.597301 master-0 kubenswrapper[7614]: I0224 05:14:59.597249 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2d3d57f1-cd67-4f1d-b267-f652b9bb3448","Type":"ContainerStarted","Data":"9b98ab8d2dc17a91ddedb320e3bb1181b379c4590b7ec6f960ba108eb0e71383"} Feb 24 05:14:59.659591 master-0 kubenswrapper[7614]: I0224 05:14:59.659368 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=3.659339654 podStartE2EDuration="3.659339654s" podCreationTimestamp="2026-02-24 05:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:14:59.658299235 +0000 UTC m=+30.693042401" watchObservedRunningTime="2026-02-24 05:14:59.659339654 +0000 UTC m=+30.694082820" Feb 24 05:14:59.782343 master-0 kubenswrapper[7614]: I0224 05:14:59.782089 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:14:59.782625 master-0 kubenswrapper[7614]: E0224 05:14:59.782456 7614 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 05:14:59.782625 master-0 kubenswrapper[7614]: E0224 
05:14:59.782544 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit podName:5afb5486-5f33-4755-bfe9-c993d0d9ca71 nodeName:}" failed. No retries permitted until 2026-02-24 05:15:01.782518789 +0000 UTC m=+32.817261965 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit") pod "apiserver-786f58c449-64k2s" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71") : configmap "audit-0" not found Feb 24 05:14:59.841988 master-0 kubenswrapper[7614]: I0224 05:14:59.841916 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv"] Feb 24 05:14:59.851428 master-0 kubenswrapper[7614]: W0224 05:14:59.851367 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a2f046_94a3_481e_b8f5_b2666e151fc9.slice/crio-a13ae7398b644d75895c0474f0121634bb2569001020b64bd65755923d771513 WatchSource:0}: Error finding container a13ae7398b644d75895c0474f0121634bb2569001020b64bd65755923d771513: Status 404 returned error can't find the container with id a13ae7398b644d75895c0474f0121634bb2569001020b64bd65755923d771513 Feb 24 05:15:00.603070 master-0 kubenswrapper[7614]: I0224 05:15:00.602940 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" event={"ID":"75a2f046-94a3-481e-b8f5-b2666e151fc9","Type":"ContainerStarted","Data":"a13ae7398b644d75895c0474f0121634bb2569001020b64bd65755923d771513"} Feb 24 05:15:01.846510 master-0 kubenswrapper[7614]: I0224 05:15:01.831031 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit\") pod \"apiserver-786f58c449-64k2s\" (UID: 
\"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:15:01.846510 master-0 kubenswrapper[7614]: E0224 05:15:01.831257 7614 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 05:15:01.846510 master-0 kubenswrapper[7614]: E0224 05:15:01.831344 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit podName:5afb5486-5f33-4755-bfe9-c993d0d9ca71 nodeName:}" failed. No retries permitted until 2026-02-24 05:15:05.831303027 +0000 UTC m=+36.866046183 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit") pod "apiserver-786f58c449-64k2s" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71") : configmap "audit-0" not found Feb 24 05:15:01.932164 master-0 kubenswrapper[7614]: I0224 05:15:01.932088 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:15:01.932518 master-0 kubenswrapper[7614]: I0224 05:15:01.932435 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:15:01.932671 master-0 kubenswrapper[7614]: I0224 05:15:01.932644 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:15:01.932734 master-0 kubenswrapper[7614]: I0224 05:15:01.932712 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:15:01.932831 master-0 kubenswrapper[7614]: I0224 05:15:01.932806 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:15:01.936611 master-0 kubenswrapper[7614]: I0224 05:15:01.936566 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:15:01.936680 master-0 kubenswrapper[7614]: I0224 05:15:01.936616 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: 
\"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:15:01.936785 master-0 kubenswrapper[7614]: I0224 05:15:01.936571 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:15:01.937837 master-0 kubenswrapper[7614]: I0224 05:15:01.937793 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:15:01.944753 master-0 kubenswrapper[7614]: I0224 05:15:01.944706 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"multus-admission-controller-5f98f4f8d5-b985k\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:15:02.228271 master-0 kubenswrapper[7614]: I0224 05:15:02.228042 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:15:02.228271 master-0 kubenswrapper[7614]: I0224 05:15:02.228126 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:15:02.229439 master-0 kubenswrapper[7614]: I0224 05:15:02.229074 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:15:02.229523 master-0 kubenswrapper[7614]: I0224 05:15:02.229484 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:15:02.230128 master-0 kubenswrapper[7614]: I0224 05:15:02.230098 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:15:03.806143 master-0 kubenswrapper[7614]: I0224 05:15:03.805608 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2vsjh"] Feb 24 05:15:03.806143 master-0 kubenswrapper[7614]: I0224 05:15:03.805719 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"] Feb 24 05:15:03.812393 master-0 kubenswrapper[7614]: I0224 05:15:03.807161 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"] Feb 24 05:15:03.812393 master-0 kubenswrapper[7614]: I0224 05:15:03.807999 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"] Feb 24 05:15:03.812393 master-0 kubenswrapper[7614]: I0224 05:15:03.808815 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"] Feb 24 05:15:04.049650 master-0 kubenswrapper[7614]: W0224 05:15:04.049343 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49bfccec_61ec_4bef_a561_9f6e6f906215.slice/crio-371c4924a11b805a233cd8aa1cdf64502325cac941f4d66f86f54a68683a9e74 WatchSource:0}: Error finding container 371c4924a11b805a233cd8aa1cdf64502325cac941f4d66f86f54a68683a9e74: Status 404 returned error can't find the 
container with id 371c4924a11b805a233cd8aa1cdf64502325cac941f4d66f86f54a68683a9e74 Feb 24 05:15:04.638334 master-0 kubenswrapper[7614]: I0224 05:15:04.638143 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" event={"ID":"49bfccec-61ec-4bef-a561-9f6e6f906215","Type":"ContainerStarted","Data":"371c4924a11b805a233cd8aa1cdf64502325cac941f4d66f86f54a68683a9e74"} Feb 24 05:15:04.873338 master-0 kubenswrapper[7614]: I0224 05:15:04.871890 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:15:05.187681 master-0 kubenswrapper[7614]: I0224 05:15:05.186510 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-786f58c449-64k2s"] Feb 24 05:15:05.187681 master-0 kubenswrapper[7614]: E0224 05:15:05.187363 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-786f58c449-64k2s" podUID="5afb5486-5f33-4755-bfe9-c993d0d9ca71" Feb 24 05:15:05.315205 master-0 kubenswrapper[7614]: I0224 05:15:05.315089 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:15:05.639993 master-0 kubenswrapper[7614]: W0224 05:15:05.639852 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dcc5520_7aa8_4cd5_b06d_591827ed4e2a.slice/crio-267ebddc959ac57c572038da835a770f0388428b8136a92cef38a57e55a51aac WatchSource:0}: Error finding container 267ebddc959ac57c572038da835a770f0388428b8136a92cef38a57e55a51aac: Status 404 returned error can't find the container with id 267ebddc959ac57c572038da835a770f0388428b8136a92cef38a57e55a51aac Feb 24 05:15:05.649530 
master-0 kubenswrapper[7614]: W0224 05:15:05.649443 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8be1f8db_3f0b_4d6f_be42_7564fba66820.slice/crio-46a5994b405203be832b6e8a9d78723e27b9a540f4fcd8cfc16f6928523dcdb0 WatchSource:0}: Error finding container 46a5994b405203be832b6e8a9d78723e27b9a540f4fcd8cfc16f6928523dcdb0: Status 404 returned error can't find the container with id 46a5994b405203be832b6e8a9d78723e27b9a540f4fcd8cfc16f6928523dcdb0 Feb 24 05:15:05.651454 master-0 kubenswrapper[7614]: I0224 05:15:05.651361 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:15:05.653536 master-0 kubenswrapper[7614]: W0224 05:15:05.653426 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc177f8fe_8145_4557_ae78_af121efe001c.slice/crio-5dd4d0e15147dd2dcd433c46cdfb1a10fbbcd3b91480c55088fbf67973e54f4c WatchSource:0}: Error finding container 5dd4d0e15147dd2dcd433c46cdfb1a10fbbcd3b91480c55088fbf67973e54f4c: Status 404 returned error can't find the container with id 5dd4d0e15147dd2dcd433c46cdfb1a10fbbcd3b91480c55088fbf67973e54f4c Feb 24 05:15:05.710364 master-0 kubenswrapper[7614]: I0224 05:15:05.709924 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:15:05.793980 master-0 kubenswrapper[7614]: I0224 05:15:05.793878 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit-dir\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.793980 master-0 kubenswrapper[7614]: I0224 05:15:05.793981 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-serving-cert\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.794510 master-0 kubenswrapper[7614]: I0224 05:15:05.794036 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-trusted-ca-bundle\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.794510 master-0 kubenswrapper[7614]: I0224 05:15:05.794112 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-encryption-config\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.794510 master-0 kubenswrapper[7614]: I0224 05:15:05.794116 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:15:05.794510 master-0 kubenswrapper[7614]: I0224 05:15:05.794169 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-config\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.794510 master-0 kubenswrapper[7614]: I0224 05:15:05.794361 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-image-import-ca\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.794510 master-0 kubenswrapper[7614]: I0224 05:15:05.794441 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-client\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.796458 master-0 kubenswrapper[7614]: I0224 05:15:05.794508 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s67l4\" (UniqueName: \"kubernetes.io/projected/5afb5486-5f33-4755-bfe9-c993d0d9ca71-kube-api-access-s67l4\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.796579 master-0 kubenswrapper[7614]: I0224 05:15:05.796521 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-node-pullsecrets\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.796579 master-0 kubenswrapper[7614]: I0224 05:15:05.796575 7614 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-serving-ca\") pod \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " Feb 24 05:15:05.796793 master-0 kubenswrapper[7614]: I0224 05:15:05.795002 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-config" (OuterVolumeSpecName: "config") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:05.796793 master-0 kubenswrapper[7614]: I0224 05:15:05.795067 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:05.796793 master-0 kubenswrapper[7614]: I0224 05:15:05.795461 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:05.797097 master-0 kubenswrapper[7614]: I0224 05:15:05.797043 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.797097 master-0 kubenswrapper[7614]: I0224 05:15:05.797081 7614 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-image-import-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.797250 master-0 kubenswrapper[7614]: I0224 05:15:05.797103 7614 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.797250 master-0 kubenswrapper[7614]: I0224 05:15:05.797123 7614 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.797250 master-0 kubenswrapper[7614]: I0224 05:15:05.797090 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:15:05.797250 master-0 kubenswrapper[7614]: I0224 05:15:05.797145 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:05.797996 master-0 kubenswrapper[7614]: I0224 05:15:05.797939 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5afb5486-5f33-4755-bfe9-c993d0d9ca71-kube-api-access-s67l4" (OuterVolumeSpecName: "kube-api-access-s67l4") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "kube-api-access-s67l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:15:05.799756 master-0 kubenswrapper[7614]: I0224 05:15:05.799664 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:15:05.799944 master-0 kubenswrapper[7614]: I0224 05:15:05.799804 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:15:05.800478 master-0 kubenswrapper[7614]: I0224 05:15:05.800383 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "5afb5486-5f33-4755-bfe9-c993d0d9ca71" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:15:05.905715 master-0 kubenswrapper[7614]: I0224 05:15:05.905494 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit\") pod \"apiserver-786f58c449-64k2s\" (UID: \"5afb5486-5f33-4755-bfe9-c993d0d9ca71\") " pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:15:05.905715 master-0 kubenswrapper[7614]: I0224 05:15:05.905626 7614 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-client\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.905715 master-0 kubenswrapper[7614]: I0224 05:15:05.905662 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s67l4\" (UniqueName: \"kubernetes.io/projected/5afb5486-5f33-4755-bfe9-c993d0d9ca71-kube-api-access-s67l4\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.905715 master-0 kubenswrapper[7614]: I0224 05:15:05.905698 7614 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5afb5486-5f33-4755-bfe9-c993d0d9ca71-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.905715 master-0 kubenswrapper[7614]: I0224 05:15:05.905734 7614 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.906973 master-0 kubenswrapper[7614]: I0224 05:15:05.905754 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.906973 master-0 kubenswrapper[7614]: I0224 05:15:05.905773 7614 reconciler_common.go:293] "Volume detached for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5afb5486-5f33-4755-bfe9-c993d0d9ca71-encryption-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:05.906973 master-0 kubenswrapper[7614]: E0224 05:15:05.905692 7614 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 24 05:15:05.906973 master-0 kubenswrapper[7614]: E0224 05:15:05.905903 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit podName:5afb5486-5f33-4755-bfe9-c993d0d9ca71 nodeName:}" failed. No retries permitted until 2026-02-24 05:15:13.905870644 +0000 UTC m=+44.940613830 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit") pod "apiserver-786f58c449-64k2s" (UID: "5afb5486-5f33-4755-bfe9-c993d0d9ca71") : configmap "audit-0" not found Feb 24 05:15:06.657436 master-0 kubenswrapper[7614]: I0224 05:15:06.657359 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" event={"ID":"c177f8fe-8145-4557-ae78-af121efe001c","Type":"ContainerStarted","Data":"5dd4d0e15147dd2dcd433c46cdfb1a10fbbcd3b91480c55088fbf67973e54f4c"} Feb 24 05:15:06.658554 master-0 kubenswrapper[7614]: I0224 05:15:06.658510 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" event={"ID":"dd29bef3-d27e-48b3-9aa0-d915e949b3d5","Type":"ContainerStarted","Data":"c125f0138a2358ed33a087eaebb28b417878c3d57e675823d35e0431d5663d9e"} Feb 24 05:15:06.659656 master-0 kubenswrapper[7614]: I0224 05:15:06.659619 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2vsjh" 
event={"ID":"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a","Type":"ContainerStarted","Data":"267ebddc959ac57c572038da835a770f0388428b8136a92cef38a57e55a51aac"} Feb 24 05:15:06.660740 master-0 kubenswrapper[7614]: I0224 05:15:06.660683 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" event={"ID":"8be1f8db-3f0b-4d6f-be42-7564fba66820","Type":"ContainerStarted","Data":"46a5994b405203be832b6e8a9d78723e27b9a540f4fcd8cfc16f6928523dcdb0"} Feb 24 05:15:06.660799 master-0 kubenswrapper[7614]: I0224 05:15:06.660765 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-786f58c449-64k2s" Feb 24 05:15:06.909449 master-0 kubenswrapper[7614]: I0224 05:15:06.905466 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-786f58c449-64k2s"] Feb 24 05:15:06.915338 master-0 kubenswrapper[7614]: I0224 05:15:06.913136 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-786f58c449-64k2s"] Feb 24 05:15:07.041114 master-0 kubenswrapper[7614]: I0224 05:15:07.038821 7614 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5afb5486-5f33-4755-bfe9-c993d0d9ca71-audit\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:07.181500 master-0 kubenswrapper[7614]: I0224 05:15:07.181357 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5afb5486-5f33-4755-bfe9-c993d0d9ca71" path="/var/lib/kubelet/pods/5afb5486-5f33-4755-bfe9-c993d0d9ca71/volumes" Feb 24 05:15:07.668479 master-0 kubenswrapper[7614]: I0224 05:15:07.668430 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" event={"ID":"d458399b-415c-4aa6-be1a-7364c42841c7","Type":"ContainerStarted","Data":"ce1584aceb82aaa760f4b8bfb7af2460c57e030fd815cca2839f27eaa74df4aa"} Feb 24 05:15:07.668785 master-0 
kubenswrapper[7614]: I0224 05:15:07.668607 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" podUID="d458399b-415c-4aa6-be1a-7364c42841c7" containerName="controller-manager" containerID="cri-o://ce1584aceb82aaa760f4b8bfb7af2460c57e030fd815cca2839f27eaa74df4aa" gracePeriod=30 Feb 24 05:15:07.669497 master-0 kubenswrapper[7614]: I0224 05:15:07.669424 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:15:07.671796 master-0 kubenswrapper[7614]: I0224 05:15:07.671743 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" event={"ID":"49bfccec-61ec-4bef-a561-9f6e6f906215","Type":"ContainerStarted","Data":"b32f602e65fa0f96061ae1ac1598eb179ad2acaeb5a0a72fea806e7a02cf3708"} Feb 24 05:15:07.680382 master-0 kubenswrapper[7614]: I0224 05:15:07.677622 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:15:07.683677 master-0 kubenswrapper[7614]: I0224 05:15:07.683558 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" event={"ID":"75a2f046-94a3-481e-b8f5-b2666e151fc9","Type":"ContainerStarted","Data":"ab447b6da9854f88d9ed73e853efdddd099f2776799cafee02fcb896b0a6f932"} Feb 24 05:15:07.684461 master-0 kubenswrapper[7614]: I0224 05:15:07.684096 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:15:07.690420 master-0 kubenswrapper[7614]: I0224 05:15:07.689246 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" 
podStartSLOduration=3.364434701 podStartE2EDuration="10.689222633s" podCreationTimestamp="2026-02-24 05:14:57 +0000 UTC" firstStartedPulling="2026-02-24 05:14:59.540123699 +0000 UTC m=+30.574866845" lastFinishedPulling="2026-02-24 05:15:06.864911611 +0000 UTC m=+37.899654777" observedRunningTime="2026-02-24 05:15:07.688102551 +0000 UTC m=+38.722845707" watchObservedRunningTime="2026-02-24 05:15:07.689222633 +0000 UTC m=+38.723965789" Feb 24 05:15:07.693603 master-0 kubenswrapper[7614]: I0224 05:15:07.693560 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:15:07.706183 master-0 kubenswrapper[7614]: I0224 05:15:07.705049 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" podStartSLOduration=1.68704707 podStartE2EDuration="8.705026261s" podCreationTimestamp="2026-02-24 05:14:59 +0000 UTC" firstStartedPulling="2026-02-24 05:14:59.854951003 +0000 UTC m=+30.889694159" lastFinishedPulling="2026-02-24 05:15:06.872930194 +0000 UTC m=+37.907673350" observedRunningTime="2026-02-24 05:15:07.701917185 +0000 UTC m=+38.736660341" watchObservedRunningTime="2026-02-24 05:15:07.705026261 +0000 UTC m=+38.739769417" Feb 24 05:15:07.872974 master-0 kubenswrapper[7614]: I0224 05:15:07.872905 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-fdc9d7cdd-8v72m"] Feb 24 05:15:07.874036 master-0 kubenswrapper[7614]: I0224 05:15:07.873999 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.882371 master-0 kubenswrapper[7614]: I0224 05:15:07.882275 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 05:15:07.882607 master-0 kubenswrapper[7614]: I0224 05:15:07.882584 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 05:15:07.882684 master-0 kubenswrapper[7614]: I0224 05:15:07.882609 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 05:15:07.882924 master-0 kubenswrapper[7614]: I0224 05:15:07.882890 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 05:15:07.889563 master-0 kubenswrapper[7614]: I0224 05:15:07.889530 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 05:15:07.889765 master-0 kubenswrapper[7614]: I0224 05:15:07.889741 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 05:15:07.890255 master-0 kubenswrapper[7614]: I0224 05:15:07.890226 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 05:15:07.890370 master-0 kubenswrapper[7614]: I0224 05:15:07.890298 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 05:15:07.891805 master-0 kubenswrapper[7614]: I0224 05:15:07.891714 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 05:15:07.893449 master-0 kubenswrapper[7614]: I0224 05:15:07.893426 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 05:15:07.896110 master-0 kubenswrapper[7614]: I0224 05:15:07.896041 7614 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-fdc9d7cdd-8v72m"] Feb 24 05:15:07.956635 master-0 kubenswrapper[7614]: I0224 05:15:07.956474 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit-dir\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.956635 master-0 kubenswrapper[7614]: I0224 05:15:07.956582 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-encryption-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.956635 master-0 kubenswrapper[7614]: I0224 05:15:07.956629 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-image-import-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.957653 master-0 kubenswrapper[7614]: I0224 05:15:07.956705 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.957653 master-0 kubenswrapper[7614]: I0224 05:15:07.956783 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-trusted-ca-bundle\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.957653 master-0 kubenswrapper[7614]: I0224 05:15:07.956820 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-client\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.957653 master-0 kubenswrapper[7614]: I0224 05:15:07.956883 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-node-pullsecrets\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.957653 master-0 kubenswrapper[7614]: I0224 05:15:07.956933 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.957653 master-0 kubenswrapper[7614]: I0224 05:15:07.956976 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-serving-cert\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.957653 master-0 kubenswrapper[7614]: I0224 05:15:07.957022 7614 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtnxg\" (UniqueName: \"kubernetes.io/projected/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-kube-api-access-dtnxg\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:07.957653 master-0 kubenswrapper[7614]: I0224 05:15:07.957065 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-serving-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058490 master-0 kubenswrapper[7614]: I0224 05:15:08.058427 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-trusted-ca-bundle\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058490 master-0 kubenswrapper[7614]: I0224 05:15:08.058485 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-client\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058753 master-0 kubenswrapper[7614]: I0224 05:15:08.058513 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-node-pullsecrets\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058753 
master-0 kubenswrapper[7614]: I0224 05:15:08.058531 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-serving-cert\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058753 master-0 kubenswrapper[7614]: I0224 05:15:08.058553 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058753 master-0 kubenswrapper[7614]: I0224 05:15:08.058577 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtnxg\" (UniqueName: \"kubernetes.io/projected/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-kube-api-access-dtnxg\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058753 master-0 kubenswrapper[7614]: I0224 05:15:08.058614 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-serving-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058753 master-0 kubenswrapper[7614]: I0224 05:15:08.058634 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit-dir\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 
05:15:08.058753 master-0 kubenswrapper[7614]: I0224 05:15:08.058658 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-encryption-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058753 master-0 kubenswrapper[7614]: I0224 05:15:08.058677 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-image-import-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.058753 master-0 kubenswrapper[7614]: I0224 05:15:08.058702 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.059607 master-0 kubenswrapper[7614]: I0224 05:15:08.059585 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.060775 master-0 kubenswrapper[7614]: I0224 05:15:08.060753 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-trusted-ca-bundle\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.061443 
master-0 kubenswrapper[7614]: I0224 05:15:08.061383 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit-dir\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.061502 master-0 kubenswrapper[7614]: I0224 05:15:08.061406 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-node-pullsecrets\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.062641 master-0 kubenswrapper[7614]: I0224 05:15:08.062414 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.062641 master-0 kubenswrapper[7614]: I0224 05:15:08.062542 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-serving-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.062765 master-0 kubenswrapper[7614]: I0224 05:15:08.062621 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-image-import-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.064107 master-0 kubenswrapper[7614]: 
I0224 05:15:08.064057 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-client\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.065214 master-0 kubenswrapper[7614]: I0224 05:15:08.065183 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-encryption-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.065440 master-0 kubenswrapper[7614]: I0224 05:15:08.065413 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-serving-cert\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.081657 master-0 kubenswrapper[7614]: I0224 05:15:08.081582 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtnxg\" (UniqueName: \"kubernetes.io/projected/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-kube-api-access-dtnxg\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.199159 master-0 kubenswrapper[7614]: I0224 05:15:08.199084 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:08.465023 master-0 kubenswrapper[7614]: I0224 05:15:08.464965 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"] Feb 24 05:15:08.467204 master-0 kubenswrapper[7614]: I0224 05:15:08.467153 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" podUID="7c4b448f-670e-45a1-bdd7-c42903c682a9" containerName="cluster-version-operator" containerID="cri-o://e338e09a246700858efa3e983721a941e7283cc7d53a58bf5899c50605032792" gracePeriod=130 Feb 24 05:15:08.510401 master-0 kubenswrapper[7614]: I0224 05:15:08.510270 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 24 05:15:08.511003 master-0 kubenswrapper[7614]: I0224 05:15:08.510972 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.513210 master-0 kubenswrapper[7614]: I0224 05:15:08.513160 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 24 05:15:08.518729 master-0 kubenswrapper[7614]: I0224 05:15:08.518679 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 24 05:15:08.666141 master-0 kubenswrapper[7614]: I0224 05:15:08.666077 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.666141 master-0 kubenswrapper[7614]: I0224 05:15:08.666134 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74d070e9-4193-4598-ad68-15955b07d649-kube-api-access\") pod \"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.666555 master-0 kubenswrapper[7614]: I0224 05:15:08.666165 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-var-lock\") pod \"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.692900 master-0 kubenswrapper[7614]: I0224 05:15:08.692688 7614 generic.go:334] "Generic (PLEG): container finished" podID="7c4b448f-670e-45a1-bdd7-c42903c682a9" containerID="e338e09a246700858efa3e983721a941e7283cc7d53a58bf5899c50605032792" exitCode=0 Feb 24 05:15:08.692900 master-0 
kubenswrapper[7614]: I0224 05:15:08.692810 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" event={"ID":"7c4b448f-670e-45a1-bdd7-c42903c682a9","Type":"ContainerDied","Data":"e338e09a246700858efa3e983721a941e7283cc7d53a58bf5899c50605032792"} Feb 24 05:15:08.700154 master-0 kubenswrapper[7614]: I0224 05:15:08.695911 7614 generic.go:334] "Generic (PLEG): container finished" podID="d458399b-415c-4aa6-be1a-7364c42841c7" containerID="ce1584aceb82aaa760f4b8bfb7af2460c57e030fd815cca2839f27eaa74df4aa" exitCode=0 Feb 24 05:15:08.700154 master-0 kubenswrapper[7614]: I0224 05:15:08.696763 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" event={"ID":"d458399b-415c-4aa6-be1a-7364c42841c7","Type":"ContainerDied","Data":"ce1584aceb82aaa760f4b8bfb7af2460c57e030fd815cca2839f27eaa74df4aa"} Feb 24 05:15:08.769421 master-0 kubenswrapper[7614]: I0224 05:15:08.768570 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.769421 master-0 kubenswrapper[7614]: I0224 05:15:08.768640 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74d070e9-4193-4598-ad68-15955b07d649-kube-api-access\") pod \"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.769421 master-0 kubenswrapper[7614]: I0224 05:15:08.768776 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-kubelet-dir\") pod 
\"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.769421 master-0 kubenswrapper[7614]: I0224 05:15:08.768983 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-var-lock\") pod \"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.769421 master-0 kubenswrapper[7614]: I0224 05:15:08.769168 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-var-lock\") pod \"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.791848 master-0 kubenswrapper[7614]: I0224 05:15:08.791791 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74d070e9-4193-4598-ad68-15955b07d649-kube-api-access\") pod \"installer-1-master-0\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.929619 master-0 kubenswrapper[7614]: I0224 05:15:08.929554 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:08.972160 master-0 kubenswrapper[7614]: I0224 05:15:08.972078 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" Feb 24 05:15:09.005784 master-0 kubenswrapper[7614]: I0224 05:15:09.005371 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b94645546-lgnpc"] Feb 24 05:15:09.005784 master-0 kubenswrapper[7614]: E0224 05:15:09.005574 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d458399b-415c-4aa6-be1a-7364c42841c7" containerName="controller-manager" Feb 24 05:15:09.005784 master-0 kubenswrapper[7614]: I0224 05:15:09.005590 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="d458399b-415c-4aa6-be1a-7364c42841c7" containerName="controller-manager" Feb 24 05:15:09.005784 master-0 kubenswrapper[7614]: I0224 05:15:09.005683 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="d458399b-415c-4aa6-be1a-7364c42841c7" containerName="controller-manager" Feb 24 05:15:09.006208 master-0 kubenswrapper[7614]: I0224 05:15:09.006090 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:09.017621 master-0 kubenswrapper[7614]: I0224 05:15:09.016791 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b94645546-lgnpc"] Feb 24 05:15:09.073283 master-0 kubenswrapper[7614]: I0224 05:15:09.073214 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-config\") pod \"d458399b-415c-4aa6-be1a-7364c42841c7\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " Feb 24 05:15:09.073283 master-0 kubenswrapper[7614]: I0224 05:15:09.073291 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-proxy-ca-bundles\") pod \"d458399b-415c-4aa6-be1a-7364c42841c7\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " Feb 24 05:15:09.073684 master-0 kubenswrapper[7614]: I0224 05:15:09.073380 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8nv5\" (UniqueName: \"kubernetes.io/projected/d458399b-415c-4aa6-be1a-7364c42841c7-kube-api-access-g8nv5\") pod \"d458399b-415c-4aa6-be1a-7364c42841c7\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " Feb 24 05:15:09.073684 master-0 kubenswrapper[7614]: I0224 05:15:09.073593 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d458399b-415c-4aa6-be1a-7364c42841c7-serving-cert\") pod \"d458399b-415c-4aa6-be1a-7364c42841c7\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " Feb 24 05:15:09.073825 master-0 kubenswrapper[7614]: I0224 05:15:09.073772 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-client-ca\") pod \"d458399b-415c-4aa6-be1a-7364c42841c7\" (UID: \"d458399b-415c-4aa6-be1a-7364c42841c7\") " Feb 24 05:15:09.074231 master-0 kubenswrapper[7614]: I0224 05:15:09.074197 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d458399b-415c-4aa6-be1a-7364c42841c7" (UID: "d458399b-415c-4aa6-be1a-7364c42841c7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:09.074288 master-0 kubenswrapper[7614]: I0224 05:15:09.074240 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-config" (OuterVolumeSpecName: "config") pod "d458399b-415c-4aa6-be1a-7364c42841c7" (UID: "d458399b-415c-4aa6-be1a-7364c42841c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:09.074777 master-0 kubenswrapper[7614]: I0224 05:15:09.074685 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-client-ca" (OuterVolumeSpecName: "client-ca") pod "d458399b-415c-4aa6-be1a-7364c42841c7" (UID: "d458399b-415c-4aa6-be1a-7364c42841c7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:09.074938 master-0 kubenswrapper[7614]: I0224 05:15:09.074862 7614 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:09.074938 master-0 kubenswrapper[7614]: I0224 05:15:09.074892 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:09.074938 master-0 kubenswrapper[7614]: I0224 05:15:09.074904 7614 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d458399b-415c-4aa6-be1a-7364c42841c7-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:09.076757 master-0 kubenswrapper[7614]: I0224 05:15:09.076720 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d458399b-415c-4aa6-be1a-7364c42841c7-kube-api-access-g8nv5" (OuterVolumeSpecName: "kube-api-access-g8nv5") pod "d458399b-415c-4aa6-be1a-7364c42841c7" (UID: "d458399b-415c-4aa6-be1a-7364c42841c7"). InnerVolumeSpecName "kube-api-access-g8nv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:15:09.077508 master-0 kubenswrapper[7614]: I0224 05:15:09.077454 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d458399b-415c-4aa6-be1a-7364c42841c7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d458399b-415c-4aa6-be1a-7364c42841c7" (UID: "d458399b-415c-4aa6-be1a-7364c42841c7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:15:09.175798 master-0 kubenswrapper[7614]: I0224 05:15:09.175700 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-proxy-ca-bundles\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:09.175798 master-0 kubenswrapper[7614]: I0224 05:15:09.175756 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q22lj\" (UniqueName: \"kubernetes.io/projected/5112c3a6-9296-4687-9922-f7e4156d2de7-kube-api-access-q22lj\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:09.175798 master-0 kubenswrapper[7614]: I0224 05:15:09.175786 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-client-ca\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:09.175969 master-0 kubenswrapper[7614]: I0224 05:15:09.175823 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5112c3a6-9296-4687-9922-f7e4156d2de7-serving-cert\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:09.175969 master-0 kubenswrapper[7614]: I0224 05:15:09.175851 7614 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-config\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:09.175969 master-0 kubenswrapper[7614]: I0224 05:15:09.175950 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8nv5\" (UniqueName: \"kubernetes.io/projected/d458399b-415c-4aa6-be1a-7364c42841c7-kube-api-access-g8nv5\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:09.176068 master-0 kubenswrapper[7614]: I0224 05:15:09.176014 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d458399b-415c-4aa6-be1a-7364c42841c7-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:09.276925 master-0 kubenswrapper[7614]: I0224 05:15:09.276772 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-client-ca\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:09.276925 master-0 kubenswrapper[7614]: I0224 05:15:09.276851 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5112c3a6-9296-4687-9922-f7e4156d2de7-serving-cert\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:09.276925 master-0 kubenswrapper[7614]: I0224 05:15:09.276880 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-config\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.277373 master-0 kubenswrapper[7614]: I0224 05:15:09.277221 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-proxy-ca-bundles\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.277692 master-0 kubenswrapper[7614]: I0224 05:15:09.277616 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q22lj\" (UniqueName: \"kubernetes.io/projected/5112c3a6-9296-4687-9922-f7e4156d2de7-kube-api-access-q22lj\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.278339 master-0 kubenswrapper[7614]: I0224 05:15:09.278259 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-config\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.278918 master-0 kubenswrapper[7614]: I0224 05:15:09.278852 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-proxy-ca-bundles\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.280440 master-0 kubenswrapper[7614]: I0224 05:15:09.280335 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-client-ca\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.281091 master-0 kubenswrapper[7614]: I0224 05:15:09.281051 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5112c3a6-9296-4687-9922-f7e4156d2de7-serving-cert\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.304252 master-0 kubenswrapper[7614]: I0224 05:15:09.304156 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q22lj\" (UniqueName: \"kubernetes.io/projected/5112c3a6-9296-4687-9922-f7e4156d2de7-kube-api-access-q22lj\") pod \"controller-manager-5b94645546-lgnpc\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.332032 master-0 kubenswrapper[7614]: I0224 05:15:09.331929 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:09.704841 master-0 kubenswrapper[7614]: I0224 05:15:09.704643 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh" event={"ID":"d458399b-415c-4aa6-be1a-7364c42841c7","Type":"ContainerDied","Data":"87e032a96cc9b8f5869d8fccac9d6efc8bfee5fb3c683089c50456e7eea4cb4b"}
Feb 24 05:15:09.704841 master-0 kubenswrapper[7614]: I0224 05:15:09.704720 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh"
Feb 24 05:15:09.704841 master-0 kubenswrapper[7614]: I0224 05:15:09.704776 7614 scope.go:117] "RemoveContainer" containerID="ce1584aceb82aaa760f4b8bfb7af2460c57e030fd815cca2839f27eaa74df4aa"
Feb 24 05:15:09.724135 master-0 kubenswrapper[7614]: I0224 05:15:09.723381 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh"]
Feb 24 05:15:09.727988 master-0 kubenswrapper[7614]: I0224 05:15:09.727938 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-669d5ddb7c-jzjkh"]
Feb 24 05:15:09.768171 master-0 kubenswrapper[7614]: I0224 05:15:09.768094 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-cdk2w"
Feb 24 05:15:10.297001 master-0 kubenswrapper[7614]: I0224 05:15:10.296957 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.396007 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca\") pod \"7c4b448f-670e-45a1-bdd7-c42903c682a9\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") "
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.396098 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs\") pod \"7c4b448f-670e-45a1-bdd7-c42903c682a9\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") "
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.396129 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access\") pod \"7c4b448f-670e-45a1-bdd7-c42903c682a9\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") "
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.396185 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads\") pod \"7c4b448f-670e-45a1-bdd7-c42903c682a9\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") "
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.396244 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") pod \"7c4b448f-670e-45a1-bdd7-c42903c682a9\" (UID: \"7c4b448f-670e-45a1-bdd7-c42903c682a9\") "
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.396226 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "7c4b448f-670e-45a1-bdd7-c42903c682a9" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.396443 7614 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.397225 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca" (OuterVolumeSpecName: "service-ca") pod "7c4b448f-670e-45a1-bdd7-c42903c682a9" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:15:10.399727 master-0 kubenswrapper[7614]: I0224 05:15:10.397383 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "7c4b448f-670e-45a1-bdd7-c42903c682a9" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:15:10.402538 master-0 kubenswrapper[7614]: I0224 05:15:10.401979 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7c4b448f-670e-45a1-bdd7-c42903c682a9" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:15:10.405059 master-0 kubenswrapper[7614]: I0224 05:15:10.404671 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c4b448f-670e-45a1-bdd7-c42903c682a9" (UID: "7c4b448f-670e-45a1-bdd7-c42903c682a9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:15:10.504780 master-0 kubenswrapper[7614]: I0224 05:15:10.501748 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c4b448f-670e-45a1-bdd7-c42903c682a9-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:15:10.504780 master-0 kubenswrapper[7614]: I0224 05:15:10.501793 7614 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7c4b448f-670e-45a1-bdd7-c42903c682a9-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Feb 24 05:15:10.504780 master-0 kubenswrapper[7614]: I0224 05:15:10.501808 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4b448f-670e-45a1-bdd7-c42903c682a9-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:15:10.504780 master-0 kubenswrapper[7614]: I0224 05:15:10.501818 7614 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7c4b448f-670e-45a1-bdd7-c42903c682a9-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 05:15:10.582120 master-0 kubenswrapper[7614]: I0224 05:15:10.582075 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b94645546-lgnpc"]
Feb 24 05:15:10.588279 master-0 kubenswrapper[7614]: W0224 05:15:10.588229 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5112c3a6_9296_4687_9922_f7e4156d2de7.slice/crio-fc7d320f1c8dfab9abb33bca8fa93c8824cfb0508e2931b273ab92a8006d6a0f WatchSource:0}: Error finding container fc7d320f1c8dfab9abb33bca8fa93c8824cfb0508e2931b273ab92a8006d6a0f: Status 404 returned error can't find the container with id fc7d320f1c8dfab9abb33bca8fa93c8824cfb0508e2931b273ab92a8006d6a0f
Feb 24 05:15:10.691117 master-0 kubenswrapper[7614]: I0224 05:15:10.691070 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-fdc9d7cdd-8v72m"]
Feb 24 05:15:10.715681 master-0 kubenswrapper[7614]: W0224 05:15:10.715501 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb21148ab_4e3e_4d0b_b198_3278dd8e2e7e.slice/crio-fd03b91adf31c70f04d420a5ba045d6cd9e1f68b14c47322c66de7814d71ccf4 WatchSource:0}: Error finding container fd03b91adf31c70f04d420a5ba045d6cd9e1f68b14c47322c66de7814d71ccf4: Status 404 returned error can't find the container with id fd03b91adf31c70f04d420a5ba045d6cd9e1f68b14c47322c66de7814d71ccf4
Feb 24 05:15:10.736949 master-0 kubenswrapper[7614]: I0224 05:15:10.736897 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2" event={"ID":"7c4b448f-670e-45a1-bdd7-c42903c682a9","Type":"ContainerDied","Data":"019ddbfba3ca4b29c85cce38fc32243e83dcf06f54ada15a33120765deb62756"}
Feb 24 05:15:10.737049 master-0 kubenswrapper[7614]: I0224 05:15:10.736969 7614 scope.go:117] "RemoveContainer" containerID="e338e09a246700858efa3e983721a941e7283cc7d53a58bf5899c50605032792"
Feb 24 05:15:10.737087 master-0 kubenswrapper[7614]: I0224 05:15:10.737064 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"
Feb 24 05:15:10.746064 master-0 kubenswrapper[7614]: I0224 05:15:10.745984 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" event={"ID":"dd29bef3-d27e-48b3-9aa0-d915e949b3d5","Type":"ContainerStarted","Data":"270089d93d1aad8adc2c6f3a218f7c7455fbc8f4604c672dd2ed10a74721af6c"}
Feb 24 05:15:10.747174 master-0 kubenswrapper[7614]: I0224 05:15:10.746713 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:15:10.748152 master-0 kubenswrapper[7614]: I0224 05:15:10.748118 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2vsjh" event={"ID":"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a","Type":"ContainerStarted","Data":"8e916a389ddc5669464a5aeb6bc0ed698a7fd49715641ad481d46f966c3423f4"}
Feb 24 05:15:10.749078 master-0 kubenswrapper[7614]: I0224 05:15:10.749047 7614 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-dbsnm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" start-of-body=
Feb 24 05:15:10.749171 master-0 kubenswrapper[7614]: I0224 05:15:10.749091 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused"
Feb 24 05:15:10.750139 master-0 kubenswrapper[7614]: I0224 05:15:10.750083 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" event={"ID":"5112c3a6-9296-4687-9922-f7e4156d2de7","Type":"ContainerStarted","Data":"892c2d90e84b40ea731f6955f791f22d9c90f887063bd122af33eaed51683c25"}
Feb 24 05:15:10.750202 master-0 kubenswrapper[7614]: I0224 05:15:10.750150 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" event={"ID":"5112c3a6-9296-4687-9922-f7e4156d2de7","Type":"ContainerStarted","Data":"fc7d320f1c8dfab9abb33bca8fa93c8824cfb0508e2931b273ab92a8006d6a0f"}
Feb 24 05:15:10.751081 master-0 kubenswrapper[7614]: I0224 05:15:10.751052 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:10.752566 master-0 kubenswrapper[7614]: I0224 05:15:10.752530 7614 patch_prober.go:28] interesting pod/controller-manager-5b94645546-lgnpc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.37:8443/healthz\": dial tcp 10.128.0.37:8443: connect: connection refused" start-of-body=
Feb 24 05:15:10.752619 master-0 kubenswrapper[7614]: I0224 05:15:10.752573 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" podUID="5112c3a6-9296-4687-9922-f7e4156d2de7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.37:8443/healthz\": dial tcp 10.128.0.37:8443: connect: connection refused"
Feb 24 05:15:10.754332 master-0 kubenswrapper[7614]: I0224 05:15:10.753662 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" event={"ID":"8be1f8db-3f0b-4d6f-be42-7564fba66820","Type":"ContainerStarted","Data":"eb40f700665ddc5a59ad171b706d2fdf1426e6e5d152e9cd1782903011fd60d0"}
Feb 24 05:15:10.756333 master-0 kubenswrapper[7614]: I0224 05:15:10.756271 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" event={"ID":"c177f8fe-8145-4557-ae78-af121efe001c","Type":"ContainerStarted","Data":"ce52f180dabf411d26667979615a9a115fe332f62d97dd5de4424b708e61c2fa"}
Feb 24 05:15:10.808646 master-0 kubenswrapper[7614]: I0224 05:15:10.807772 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"]
Feb 24 05:15:10.811345 master-0 kubenswrapper[7614]: I0224 05:15:10.810364 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-5cfd9759cf-r4rf2"]
Feb 24 05:15:10.835377 master-0 kubenswrapper[7614]: I0224 05:15:10.832760 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" podStartSLOduration=11.832729042 podStartE2EDuration="11.832729042s" podCreationTimestamp="2026-02-24 05:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:15:10.826017295 +0000 UTC m=+41.860760451" watchObservedRunningTime="2026-02-24 05:15:10.832729042 +0000 UTC m=+41.867472198"
Feb 24 05:15:10.837497 master-0 kubenswrapper[7614]: I0224 05:15:10.836377 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 24 05:15:10.852225 master-0 kubenswrapper[7614]: I0224 05:15:10.852189 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-57476485-7g2gq"]
Feb 24 05:15:10.853519 master-0 kubenswrapper[7614]: E0224 05:15:10.853489 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c4b448f-670e-45a1-bdd7-c42903c682a9" containerName="cluster-version-operator"
Feb 24 05:15:10.853619 master-0 kubenswrapper[7614]: I0224 05:15:10.853608 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c4b448f-670e-45a1-bdd7-c42903c682a9" containerName="cluster-version-operator"
Feb 24 05:15:10.853821 master-0 kubenswrapper[7614]: I0224 05:15:10.853805 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c4b448f-670e-45a1-bdd7-c42903c682a9" containerName="cluster-version-operator"
Feb 24 05:15:10.854429 master-0 kubenswrapper[7614]: I0224 05:15:10.854413 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:10.856773 master-0 kubenswrapper[7614]: I0224 05:15:10.856731 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 24 05:15:10.856830 master-0 kubenswrapper[7614]: I0224 05:15:10.856790 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 05:15:10.857382 master-0 kubenswrapper[7614]: I0224 05:15:10.857358 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 05:15:11.014715 master-0 kubenswrapper[7614]: I0224 05:15:11.014443 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-service-ca\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.014715 master-0 kubenswrapper[7614]: I0224 05:15:11.014550 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-serving-cert\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.014715 master-0 kubenswrapper[7614]: I0224 05:15:11.014579 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.014715 master-0 kubenswrapper[7614]: I0224 05:15:11.014611 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-kube-api-access\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.015023 master-0 kubenswrapper[7614]: I0224 05:15:11.014790 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-ssl-certs\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.140202 master-0 kubenswrapper[7614]: I0224 05:15:11.140152 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-service-ca\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.140344 master-0 kubenswrapper[7614]: I0224 05:15:11.140330 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-serving-cert\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.140443 master-0 kubenswrapper[7614]: I0224 05:15:11.140429 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.140542 master-0 kubenswrapper[7614]: I0224 05:15:11.140523 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-kube-api-access\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.140634 master-0 kubenswrapper[7614]: I0224 05:15:11.140621 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-ssl-certs\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.140782 master-0 kubenswrapper[7614]: I0224 05:15:11.140768 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-ssl-certs\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.145676 master-0 kubenswrapper[7614]: I0224 05:15:11.143665 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-service-ca\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.145837 master-0 kubenswrapper[7614]: I0224 05:15:11.145814 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.154800 master-0 kubenswrapper[7614]: I0224 05:15:11.152891 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-serving-cert\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.189808 master-0 kubenswrapper[7614]: I0224 05:15:11.189741 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c4b448f-670e-45a1-bdd7-c42903c682a9" path="/var/lib/kubelet/pods/7c4b448f-670e-45a1-bdd7-c42903c682a9/volumes"
Feb 24 05:15:11.193743 master-0 kubenswrapper[7614]: I0224 05:15:11.190296 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d458399b-415c-4aa6-be1a-7364c42841c7" path="/var/lib/kubelet/pods/d458399b-415c-4aa6-be1a-7364c42841c7/volumes"
Feb 24 05:15:11.208421 master-0 kubenswrapper[7614]: I0224 05:15:11.208373 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-kube-api-access\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.483705 master-0 kubenswrapper[7614]: I0224 05:15:11.482223 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:15:11.662561 master-0 kubenswrapper[7614]: I0224 05:15:11.662449 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-vp2jg"
Feb 24 05:15:11.789365 master-0 kubenswrapper[7614]: I0224 05:15:11.789140 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2vsjh" event={"ID":"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a","Type":"ContainerStarted","Data":"5a2a41e4e36c413bf56da56bd25032255df04c84cda749b5964b8a941d6ed96a"}
Feb 24 05:15:11.792199 master-0 kubenswrapper[7614]: I0224 05:15:11.792161 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" event={"ID":"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e","Type":"ContainerStarted","Data":"fd03b91adf31c70f04d420a5ba045d6cd9e1f68b14c47322c66de7814d71ccf4"}
Feb 24 05:15:11.802956 master-0 kubenswrapper[7614]: I0224 05:15:11.802340 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" event={"ID":"8be1f8db-3f0b-4d6f-be42-7564fba66820","Type":"ContainerStarted","Data":"bbd5aa582f8241ea4c62c11beba1abad300d328a3af1603fa3f170227b163e28"}
Feb 24 05:15:11.806358 master-0 kubenswrapper[7614]: I0224 05:15:11.806301 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" event={"ID":"0e05783d-6bd1-4c71-87d9-1eb3edd827b3","Type":"ContainerStarted","Data":"883402f37d06428c5ac9d5006756ff5c514e20caeb827c4b80ee87b11ce334df"}
Feb 24 05:15:11.806430 master-0 kubenswrapper[7614]: I0224 05:15:11.806368 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" event={"ID":"0e05783d-6bd1-4c71-87d9-1eb3edd827b3","Type":"ContainerStarted","Data":"ca1f4967e893fa63378ca09c1eeb80d103b9e8e60104bb8036c8ccc5faa3a035"}
Feb 24 05:15:11.808856 master-0 kubenswrapper[7614]: I0224 05:15:11.808794 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"74d070e9-4193-4598-ad68-15955b07d649","Type":"ContainerStarted","Data":"ec62ccfb72151c7c722b6450bced3a8fc5369d64de69ed787b605e7b33bf1f14"}
Feb 24 05:15:11.808856 master-0 kubenswrapper[7614]: I0224 05:15:11.808838 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"74d070e9-4193-4598-ad68-15955b07d649","Type":"ContainerStarted","Data":"b3e22a12aff8d5b6b6bf25f421a38e1ab75e1b3a0b022c9941c1b0c879a1106e"}
Feb 24 05:15:11.817475 master-0 kubenswrapper[7614]: I0224 05:15:11.817416 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:15:11.825954 master-0 kubenswrapper[7614]: I0224 05:15:11.821851 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc"
Feb 24 05:15:11.834870 master-0 kubenswrapper[7614]: I0224 05:15:11.834772 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" podStartSLOduration=1.834745674 podStartE2EDuration="1.834745674s" podCreationTimestamp="2026-02-24 05:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:15:11.833131939 +0000 UTC m=+42.867875115" watchObservedRunningTime="2026-02-24 05:15:11.834745674 +0000 UTC m=+42.869488830"
Feb 24 05:15:11.921813 master-0 kubenswrapper[7614]: I0224 05:15:11.921682 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=3.92164887 podStartE2EDuration="3.92164887s" podCreationTimestamp="2026-02-24 05:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:15:11.88997165 +0000 UTC m=+42.924714806" watchObservedRunningTime="2026-02-24 05:15:11.92164887 +0000 UTC m=+42.956392026"
Feb 24 05:15:12.949420 master-0 kubenswrapper[7614]: I0224 05:15:12.949249 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"]
Feb 24 05:15:12.950411 master-0 kubenswrapper[7614]: I0224 05:15:12.950150 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.961432 master-0 kubenswrapper[7614]: I0224 05:15:12.959855 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 24 05:15:12.962951 master-0 kubenswrapper[7614]: I0224 05:15:12.962850 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 24 05:15:12.963182 master-0 kubenswrapper[7614]: I0224 05:15:12.963128 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 24 05:15:12.963384 master-0 kubenswrapper[7614]: I0224 05:15:12.963327 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 24 05:15:12.963506 master-0 kubenswrapper[7614]: I0224 05:15:12.963466 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 24 05:15:12.963688 master-0 kubenswrapper[7614]: I0224 05:15:12.963664 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 24 05:15:12.964522 master-0 kubenswrapper[7614]: I0224 05:15:12.964435 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 24 05:15:12.964827 master-0 kubenswrapper[7614]: I0224 05:15:12.964763 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 24 05:15:12.988958 master-0 kubenswrapper[7614]: I0224 05:15:12.988811 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-encryption-config\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.988958 master-0 kubenswrapper[7614]: I0224 05:15:12.988872 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lt5r\" (UniqueName: \"kubernetes.io/projected/812552f3-09b1-43f8-b910-c78e776127f8-kube-api-access-4lt5r\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.988958 master-0 kubenswrapper[7614]: I0224 05:15:12.988898 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/812552f3-09b1-43f8-b910-c78e776127f8-audit-dir\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.988958 master-0 kubenswrapper[7614]: I0224 05:15:12.988926 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-etcd-serving-ca\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.988958 master-0 kubenswrapper[7614]: I0224 05:15:12.988946 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-etcd-client\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.988958 master-0 kubenswrapper[7614]: I0224 05:15:12.988964 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-trusted-ca-bundle\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.989275 master-0 kubenswrapper[7614]: I0224 05:15:12.988985 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-audit-policies\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.989275 master-0 kubenswrapper[7614]: I0224 05:15:12.989007 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-serving-cert\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:12.989891 master-0 kubenswrapper[7614]: I0224 05:15:12.989582 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"]
Feb 24 05:15:13.090151 master-0 kubenswrapper[7614]: I0224 05:15:13.090075 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-encryption-config\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.090151 master-0 kubenswrapper[7614]: I0224 05:15:13.090143 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lt5r\" (UniqueName: \"kubernetes.io/projected/812552f3-09b1-43f8-b910-c78e776127f8-kube-api-access-4lt5r\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.090151 master-0 kubenswrapper[7614]: I0224 05:15:13.090165 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/812552f3-09b1-43f8-b910-c78e776127f8-audit-dir\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.090553 master-0 kubenswrapper[7614]: I0224 05:15:13.090188 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-etcd-serving-ca\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.090553 master-0 kubenswrapper[7614]: I0224 05:15:13.090494 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-etcd-client\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.090626 master-0 kubenswrapper[7614]: I0224 05:15:13.090568 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-trusted-ca-bundle\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.090626 master-0 kubenswrapper[7614]: I0224 05:15:13.090617 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-audit-policies\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.090726 master-0 kubenswrapper[7614]: I0224 05:15:13.090654 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-serving-cert\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.091800 master-0 kubenswrapper[7614]: I0224 05:15:13.091766 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-trusted-ca-bundle\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.091860 master-0 kubenswrapper[7614]: I0224 05:15:13.091808 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-etcd-serving-ca\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.091974 master-0 kubenswrapper[7614]: I0224 05:15:13.091934 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/812552f3-09b1-43f8-b910-c78e776127f8-audit-dir\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:13.092427 master-0 kubenswrapper[7614]: I0224 05:15:13.092404 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-audit-policies\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:15:13.094353 master-0 kubenswrapper[7614]: I0224 05:15:13.094329 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-etcd-client\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:15:13.096664 master-0 kubenswrapper[7614]: I0224 05:15:13.096625 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-serving-cert\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:15:13.101117 master-0 kubenswrapper[7614]: I0224 05:15:13.101075 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-encryption-config\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:15:13.124958 master-0 kubenswrapper[7614]: I0224 05:15:13.124908 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lt5r\" (UniqueName: \"kubernetes.io/projected/812552f3-09b1-43f8-b910-c78e776127f8-kube-api-access-4lt5r\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:15:13.307106 master-0 kubenswrapper[7614]: I0224 05:15:13.307036 7614 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:15:14.026229 master-0 kubenswrapper[7614]: I0224 05:15:14.025752 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b94645546-lgnpc"] Feb 24 05:15:14.037372 master-0 kubenswrapper[7614]: I0224 05:15:14.035782 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv"] Feb 24 05:15:14.037372 master-0 kubenswrapper[7614]: I0224 05:15:14.036023 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" podUID="75a2f046-94a3-481e-b8f5-b2666e151fc9" containerName="route-controller-manager" containerID="cri-o://ab447b6da9854f88d9ed73e853efdddd099f2776799cafee02fcb896b0a6f932" gracePeriod=30 Feb 24 05:15:14.425282 master-0 kubenswrapper[7614]: I0224 05:15:14.425149 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 05:15:14.426079 master-0 kubenswrapper[7614]: I0224 05:15:14.426061 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.432402 master-0 kubenswrapper[7614]: I0224 05:15:14.432338 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 05:15:14.442393 master-0 kubenswrapper[7614]: I0224 05:15:14.442337 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 05:15:14.512462 master-0 kubenswrapper[7614]: I0224 05:15:14.512359 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-var-lock\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.512762 master-0 kubenswrapper[7614]: I0224 05:15:14.512505 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.512762 master-0 kubenswrapper[7614]: I0224 05:15:14.512557 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.614098 master-0 kubenswrapper[7614]: I0224 05:15:14.614017 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-var-lock\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.614367 master-0 kubenswrapper[7614]: I0224 05:15:14.614115 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.614367 master-0 kubenswrapper[7614]: I0224 05:15:14.614198 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-var-lock\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.614476 master-0 kubenswrapper[7614]: I0224 05:15:14.614365 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.614523 master-0 kubenswrapper[7614]: I0224 05:15:14.614452 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.647033 master-0 kubenswrapper[7614]: I0224 05:15:14.646970 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kube-api-access\") pod \"installer-1-master-0\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.772145 master-0 kubenswrapper[7614]: I0224 05:15:14.759921 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:14.832225 master-0 kubenswrapper[7614]: I0224 05:15:14.832147 7614 generic.go:334] "Generic (PLEG): container finished" podID="75a2f046-94a3-481e-b8f5-b2666e151fc9" containerID="ab447b6da9854f88d9ed73e853efdddd099f2776799cafee02fcb896b0a6f932" exitCode=0 Feb 24 05:15:14.832575 master-0 kubenswrapper[7614]: I0224 05:15:14.832251 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" event={"ID":"75a2f046-94a3-481e-b8f5-b2666e151fc9","Type":"ContainerDied","Data":"ab447b6da9854f88d9ed73e853efdddd099f2776799cafee02fcb896b0a6f932"} Feb 24 05:15:14.832575 master-0 kubenswrapper[7614]: I0224 05:15:14.832371 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" podUID="5112c3a6-9296-4687-9922-f7e4156d2de7" containerName="controller-manager" containerID="cri-o://892c2d90e84b40ea731f6955f791f22d9c90f887063bd122af33eaed51683c25" gracePeriod=30 Feb 24 05:15:15.865868 master-0 kubenswrapper[7614]: I0224 05:15:15.865812 7614 generic.go:334] "Generic (PLEG): container finished" podID="5112c3a6-9296-4687-9922-f7e4156d2de7" containerID="892c2d90e84b40ea731f6955f791f22d9c90f887063bd122af33eaed51683c25" exitCode=0 Feb 24 05:15:15.866574 master-0 kubenswrapper[7614]: I0224 05:15:15.865996 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" 
event={"ID":"5112c3a6-9296-4687-9922-f7e4156d2de7","Type":"ContainerDied","Data":"892c2d90e84b40ea731f6955f791f22d9c90f887063bd122af33eaed51683c25"} Feb 24 05:15:17.769718 master-0 kubenswrapper[7614]: I0224 05:15:17.769519 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 24 05:15:17.770586 master-0 kubenswrapper[7614]: I0224 05:15:17.770147 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.772781 master-0 kubenswrapper[7614]: I0224 05:15:17.772740 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 05:15:17.783478 master-0 kubenswrapper[7614]: I0224 05:15:17.783421 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 24 05:15:17.860108 master-0 kubenswrapper[7614]: I0224 05:15:17.860045 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.860272 master-0 kubenswrapper[7614]: I0224 05:15:17.860123 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-var-lock\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.860272 master-0 kubenswrapper[7614]: I0224 05:15:17.860254 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e44f770d-f88d-446a-a22f-51b30e89690c-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.881426 master-0 kubenswrapper[7614]: I0224 05:15:17.881347 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" event={"ID":"75a2f046-94a3-481e-b8f5-b2666e151fc9","Type":"ContainerDied","Data":"a13ae7398b644d75895c0474f0121634bb2569001020b64bd65755923d771513"} Feb 24 05:15:17.881426 master-0 kubenswrapper[7614]: I0224 05:15:17.881398 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13ae7398b644d75895c0474f0121634bb2569001020b64bd65755923d771513" Feb 24 05:15:17.907200 master-0 kubenswrapper[7614]: I0224 05:15:17.907008 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:15:17.942217 master-0 kubenswrapper[7614]: I0224 05:15:17.942137 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz"] Feb 24 05:15:17.942545 master-0 kubenswrapper[7614]: E0224 05:15:17.942455 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a2f046-94a3-481e-b8f5-b2666e151fc9" containerName="route-controller-manager" Feb 24 05:15:17.942545 master-0 kubenswrapper[7614]: I0224 05:15:17.942475 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a2f046-94a3-481e-b8f5-b2666e151fc9" containerName="route-controller-manager" Feb 24 05:15:17.942677 master-0 kubenswrapper[7614]: I0224 05:15:17.942569 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a2f046-94a3-481e-b8f5-b2666e151fc9" containerName="route-controller-manager" Feb 24 05:15:17.943070 master-0 kubenswrapper[7614]: I0224 05:15:17.943035 7614 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:17.957654 master-0 kubenswrapper[7614]: I0224 05:15:17.957045 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz"] Feb 24 05:15:17.960757 master-0 kubenswrapper[7614]: I0224 05:15:17.960722 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a2f046-94a3-481e-b8f5-b2666e151fc9-serving-cert\") pod \"75a2f046-94a3-481e-b8f5-b2666e151fc9\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " Feb 24 05:15:17.960876 master-0 kubenswrapper[7614]: I0224 05:15:17.960788 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-config\") pod \"75a2f046-94a3-481e-b8f5-b2666e151fc9\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " Feb 24 05:15:17.960876 master-0 kubenswrapper[7614]: I0224 05:15:17.960856 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlhlf\" (UniqueName: \"kubernetes.io/projected/75a2f046-94a3-481e-b8f5-b2666e151fc9-kube-api-access-mlhlf\") pod \"75a2f046-94a3-481e-b8f5-b2666e151fc9\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " Feb 24 05:15:17.961006 master-0 kubenswrapper[7614]: I0224 05:15:17.960879 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-client-ca\") pod \"75a2f046-94a3-481e-b8f5-b2666e151fc9\" (UID: \"75a2f046-94a3-481e-b8f5-b2666e151fc9\") " Feb 24 05:15:17.961070 master-0 kubenswrapper[7614]: I0224 05:15:17.961048 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-client-ca\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:17.961134 master-0 kubenswrapper[7614]: I0224 05:15:17.961080 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-config\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:17.961134 master-0 kubenswrapper[7614]: I0224 05:15:17.961104 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbfct\" (UniqueName: \"kubernetes.io/projected/8cafd431-e8f6-4b60-9214-3d01b1f43982-kube-api-access-jbfct\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:17.961134 master-0 kubenswrapper[7614]: I0224 05:15:17.961132 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e44f770d-f88d-446a-a22f-51b30e89690c-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.961354 master-0 kubenswrapper[7614]: I0224 05:15:17.961159 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.961354 master-0 kubenswrapper[7614]: I0224 05:15:17.961180 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cafd431-e8f6-4b60-9214-3d01b1f43982-serving-cert\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:17.961354 master-0 kubenswrapper[7614]: I0224 05:15:17.961208 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-var-lock\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.961354 master-0 kubenswrapper[7614]: I0224 05:15:17.961292 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-var-lock\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.966340 master-0 kubenswrapper[7614]: I0224 05:15:17.961895 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:17.966340 master-0 kubenswrapper[7614]: I0224 05:15:17.962043 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "75a2f046-94a3-481e-b8f5-b2666e151fc9" (UID: 
"75a2f046-94a3-481e-b8f5-b2666e151fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:17.966340 master-0 kubenswrapper[7614]: I0224 05:15:17.963277 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-config" (OuterVolumeSpecName: "config") pod "75a2f046-94a3-481e-b8f5-b2666e151fc9" (UID: "75a2f046-94a3-481e-b8f5-b2666e151fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:17.968667 master-0 kubenswrapper[7614]: I0224 05:15:17.968499 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a2f046-94a3-481e-b8f5-b2666e151fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75a2f046-94a3-481e-b8f5-b2666e151fc9" (UID: "75a2f046-94a3-481e-b8f5-b2666e151fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:15:17.971230 master-0 kubenswrapper[7614]: I0224 05:15:17.969863 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a2f046-94a3-481e-b8f5-b2666e151fc9-kube-api-access-mlhlf" (OuterVolumeSpecName: "kube-api-access-mlhlf") pod "75a2f046-94a3-481e-b8f5-b2666e151fc9" (UID: "75a2f046-94a3-481e-b8f5-b2666e151fc9"). InnerVolumeSpecName "kube-api-access-mlhlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:15:17.991940 master-0 kubenswrapper[7614]: I0224 05:15:17.991853 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e44f770d-f88d-446a-a22f-51b30e89690c-kube-api-access\") pod \"installer-1-master-0\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:18.062413 master-0 kubenswrapper[7614]: I0224 05:15:18.062202 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cafd431-e8f6-4b60-9214-3d01b1f43982-serving-cert\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.062413 master-0 kubenswrapper[7614]: I0224 05:15:18.062295 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-client-ca\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.062692 master-0 kubenswrapper[7614]: I0224 05:15:18.062398 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-config\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.062692 master-0 kubenswrapper[7614]: I0224 05:15:18.062467 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbfct\" (UniqueName: 
\"kubernetes.io/projected/8cafd431-e8f6-4b60-9214-3d01b1f43982-kube-api-access-jbfct\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.062692 master-0 kubenswrapper[7614]: I0224 05:15:18.062514 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlhlf\" (UniqueName: \"kubernetes.io/projected/75a2f046-94a3-481e-b8f5-b2666e151fc9-kube-api-access-mlhlf\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.062692 master-0 kubenswrapper[7614]: I0224 05:15:18.062526 7614 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.062692 master-0 kubenswrapper[7614]: I0224 05:15:18.062537 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a2f046-94a3-481e-b8f5-b2666e151fc9-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.062692 master-0 kubenswrapper[7614]: I0224 05:15:18.062546 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a2f046-94a3-481e-b8f5-b2666e151fc9-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.065427 master-0 kubenswrapper[7614]: I0224 05:15:18.064394 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-client-ca\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.065427 master-0 kubenswrapper[7614]: I0224 05:15:18.065285 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-config\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.066108 master-0 kubenswrapper[7614]: I0224 05:15:18.065894 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cafd431-e8f6-4b60-9214-3d01b1f43982-serving-cert\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.084083 master-0 kubenswrapper[7614]: I0224 05:15:18.084032 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbfct\" (UniqueName: \"kubernetes.io/projected/8cafd431-e8f6-4b60-9214-3d01b1f43982-kube-api-access-jbfct\") pod \"route-controller-manager-85ff64b64d-965rz\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.099240 master-0 kubenswrapper[7614]: I0224 05:15:18.098753 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:15:18.113684 master-0 kubenswrapper[7614]: I0224 05:15:18.113647 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:18.163013 master-0 kubenswrapper[7614]: I0224 05:15:18.162929 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5112c3a6-9296-4687-9922-f7e4156d2de7-serving-cert\") pod \"5112c3a6-9296-4687-9922-f7e4156d2de7\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " Feb 24 05:15:18.163013 master-0 kubenswrapper[7614]: I0224 05:15:18.162999 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-proxy-ca-bundles\") pod \"5112c3a6-9296-4687-9922-f7e4156d2de7\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " Feb 24 05:15:18.163013 master-0 kubenswrapper[7614]: I0224 05:15:18.163017 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-config\") pod \"5112c3a6-9296-4687-9922-f7e4156d2de7\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " Feb 24 05:15:18.163909 master-0 kubenswrapper[7614]: I0224 05:15:18.163036 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-client-ca\") pod \"5112c3a6-9296-4687-9922-f7e4156d2de7\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " Feb 24 05:15:18.163909 master-0 kubenswrapper[7614]: I0224 05:15:18.163143 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q22lj\" (UniqueName: \"kubernetes.io/projected/5112c3a6-9296-4687-9922-f7e4156d2de7-kube-api-access-q22lj\") pod \"5112c3a6-9296-4687-9922-f7e4156d2de7\" (UID: \"5112c3a6-9296-4687-9922-f7e4156d2de7\") " Feb 24 05:15:18.164761 master-0 kubenswrapper[7614]: I0224 05:15:18.164407 7614 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-config" (OuterVolumeSpecName: "config") pod "5112c3a6-9296-4687-9922-f7e4156d2de7" (UID: "5112c3a6-9296-4687-9922-f7e4156d2de7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:18.164761 master-0 kubenswrapper[7614]: I0224 05:15:18.164575 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5112c3a6-9296-4687-9922-f7e4156d2de7" (UID: "5112c3a6-9296-4687-9922-f7e4156d2de7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:18.166729 master-0 kubenswrapper[7614]: I0224 05:15:18.166691 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5112c3a6-9296-4687-9922-f7e4156d2de7-kube-api-access-q22lj" (OuterVolumeSpecName: "kube-api-access-q22lj") pod "5112c3a6-9296-4687-9922-f7e4156d2de7" (UID: "5112c3a6-9296-4687-9922-f7e4156d2de7"). InnerVolumeSpecName "kube-api-access-q22lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:15:18.168148 master-0 kubenswrapper[7614]: I0224 05:15:18.168072 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5112c3a6-9296-4687-9922-f7e4156d2de7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5112c3a6-9296-4687-9922-f7e4156d2de7" (UID: "5112c3a6-9296-4687-9922-f7e4156d2de7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:15:18.181902 master-0 kubenswrapper[7614]: I0224 05:15:18.181596 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-client-ca" (OuterVolumeSpecName: "client-ca") pod "5112c3a6-9296-4687-9922-f7e4156d2de7" (UID: "5112c3a6-9296-4687-9922-f7e4156d2de7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:15:18.268924 master-0 kubenswrapper[7614]: I0224 05:15:18.264048 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q22lj\" (UniqueName: \"kubernetes.io/projected/5112c3a6-9296-4687-9922-f7e4156d2de7-kube-api-access-q22lj\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.268924 master-0 kubenswrapper[7614]: I0224 05:15:18.264096 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5112c3a6-9296-4687-9922-f7e4156d2de7-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.268924 master-0 kubenswrapper[7614]: I0224 05:15:18.264105 7614 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.268924 master-0 kubenswrapper[7614]: I0224 05:15:18.264115 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.268924 master-0 kubenswrapper[7614]: I0224 05:15:18.264127 7614 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5112c3a6-9296-4687-9922-f7e4156d2de7-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:18.287073 master-0 kubenswrapper[7614]: I0224 05:15:18.286946 7614 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:18.356775 master-0 kubenswrapper[7614]: I0224 05:15:18.356723 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"] Feb 24 05:15:18.449870 master-0 kubenswrapper[7614]: I0224 05:15:18.449145 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 05:15:18.463110 master-0 kubenswrapper[7614]: W0224 05:15:18.463046 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1d886fdf_fd74_45de_b7c0_2e8e75eb994e.slice/crio-1bbbffdd27360f786259f5c979d3eb55f2c697612981d0e1286af22d00168b24 WatchSource:0}: Error finding container 1bbbffdd27360f786259f5c979d3eb55f2c697612981d0e1286af22d00168b24: Status 404 returned error can't find the container with id 1bbbffdd27360f786259f5c979d3eb55f2c697612981d0e1286af22d00168b24 Feb 24 05:15:18.518664 master-0 kubenswrapper[7614]: I0224 05:15:18.518613 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 24 05:15:18.528857 master-0 kubenswrapper[7614]: W0224 05:15:18.528782 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode44f770d_f88d_446a_a22f_51b30e89690c.slice/crio-2de4f0bf021dd4e6a7368be09b5e12113f2c9fbed68c5c931e616a804a48f74b WatchSource:0}: Error finding container 2de4f0bf021dd4e6a7368be09b5e12113f2c9fbed68c5c931e616a804a48f74b: Status 404 returned error can't find the container with id 2de4f0bf021dd4e6a7368be09b5e12113f2c9fbed68c5c931e616a804a48f74b Feb 24 05:15:18.700758 master-0 kubenswrapper[7614]: I0224 05:15:18.700678 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz"] Feb 24 05:15:18.895121 master-0 
kubenswrapper[7614]: I0224 05:15:18.895056 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" event={"ID":"49bfccec-61ec-4bef-a561-9f6e6f906215","Type":"ContainerStarted","Data":"44c8e9a1ff88f591315795d60d58a57e8877a5eadcf63c1d03aab3f292d278d7"} Feb 24 05:15:18.901286 master-0 kubenswrapper[7614]: I0224 05:15:18.895299 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:15:18.901286 master-0 kubenswrapper[7614]: I0224 05:15:18.897662 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" event={"ID":"5112c3a6-9296-4687-9922-f7e4156d2de7","Type":"ContainerDied","Data":"fc7d320f1c8dfab9abb33bca8fa93c8824cfb0508e2931b273ab92a8006d6a0f"} Feb 24 05:15:18.901286 master-0 kubenswrapper[7614]: I0224 05:15:18.897738 7614 scope.go:117] "RemoveContainer" containerID="892c2d90e84b40ea731f6955f791f22d9c90f887063bd122af33eaed51683c25" Feb 24 05:15:18.901286 master-0 kubenswrapper[7614]: I0224 05:15:18.897951 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b94645546-lgnpc" Feb 24 05:15:18.906933 master-0 kubenswrapper[7614]: I0224 05:15:18.906869 7614 generic.go:334] "Generic (PLEG): container finished" podID="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" containerID="1ace97d4544be2984fbfabaf345c26dd7a0a17435d49cf2e1b85891ef684fa54" exitCode=0 Feb 24 05:15:18.906998 master-0 kubenswrapper[7614]: I0224 05:15:18.906967 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" event={"ID":"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e","Type":"ContainerDied","Data":"1ace97d4544be2984fbfabaf345c26dd7a0a17435d49cf2e1b85891ef684fa54"} Feb 24 05:15:18.914526 master-0 kubenswrapper[7614]: I0224 05:15:18.914461 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1d886fdf-fd74-45de-b7c0-2e8e75eb994e","Type":"ContainerStarted","Data":"f43db6490300b530630636ca2020a29922f028b9720d4ba80166936836bf6b4e"} Feb 24 05:15:18.914591 master-0 kubenswrapper[7614]: I0224 05:15:18.914532 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1d886fdf-fd74-45de-b7c0-2e8e75eb994e","Type":"ContainerStarted","Data":"1bbbffdd27360f786259f5c979d3eb55f2c697612981d0e1286af22d00168b24"} Feb 24 05:15:18.937845 master-0 kubenswrapper[7614]: I0224 05:15:18.937748 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" event={"ID":"8cafd431-e8f6-4b60-9214-3d01b1f43982","Type":"ContainerStarted","Data":"24f82b37e68110a8b17b3abd244f394367fec11cfc6bdefbe95aaa0a0a273ff0"} Feb 24 05:15:18.944020 master-0 kubenswrapper[7614]: I0224 05:15:18.943946 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" 
event={"ID":"e44f770d-f88d-446a-a22f-51b30e89690c","Type":"ContainerStarted","Data":"2de4f0bf021dd4e6a7368be09b5e12113f2c9fbed68c5c931e616a804a48f74b"} Feb 24 05:15:18.950142 master-0 kubenswrapper[7614]: I0224 05:15:18.949899 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" event={"ID":"812552f3-09b1-43f8-b910-c78e776127f8","Type":"ContainerStarted","Data":"ed120d47621f85e51e2ef771ce28687d4c0566d41771f7a4a34982cc8d975798"} Feb 24 05:15:18.950142 master-0 kubenswrapper[7614]: I0224 05:15:18.949973 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv" Feb 24 05:15:18.994634 master-0 kubenswrapper[7614]: I0224 05:15:18.994545 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=4.994514929 podStartE2EDuration="4.994514929s" podCreationTimestamp="2026-02-24 05:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:15:18.994015025 +0000 UTC m=+50.028758181" watchObservedRunningTime="2026-02-24 05:15:18.994514929 +0000 UTC m=+50.029258085" Feb 24 05:15:19.010171 master-0 kubenswrapper[7614]: I0224 05:15:19.008417 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b94645546-lgnpc"] Feb 24 05:15:19.013299 master-0 kubenswrapper[7614]: I0224 05:15:19.011511 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b94645546-lgnpc"] Feb 24 05:15:19.025950 master-0 kubenswrapper[7614]: I0224 05:15:19.025882 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv"] Feb 24 05:15:19.027420 master-0 kubenswrapper[7614]: 
I0224 05:15:19.027365 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56fdc6b8c6-52tgv"] Feb 24 05:15:19.189210 master-0 kubenswrapper[7614]: I0224 05:15:19.189149 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5112c3a6-9296-4687-9922-f7e4156d2de7" path="/var/lib/kubelet/pods/5112c3a6-9296-4687-9922-f7e4156d2de7/volumes" Feb 24 05:15:19.195819 master-0 kubenswrapper[7614]: I0224 05:15:19.195713 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a2f046-94a3-481e-b8f5-b2666e151fc9" path="/var/lib/kubelet/pods/75a2f046-94a3-481e-b8f5-b2666e151fc9/volumes" Feb 24 05:15:19.975305 master-0 kubenswrapper[7614]: I0224 05:15:19.975139 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" event={"ID":"8cafd431-e8f6-4b60-9214-3d01b1f43982","Type":"ContainerStarted","Data":"ee6c5d36068024e9bafe4482e75c474aa3bcf31e561b317ef75ae830061a9718"} Feb 24 05:15:19.976266 master-0 kubenswrapper[7614]: I0224 05:15:19.976199 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:19.983826 master-0 kubenswrapper[7614]: I0224 05:15:19.983766 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e44f770d-f88d-446a-a22f-51b30e89690c","Type":"ContainerStarted","Data":"1f43a4854636c4d4d499b77fab14041aa2c65280b5d333f68ca719e5325adfaf"} Feb 24 05:15:19.985133 master-0 kubenswrapper[7614]: I0224 05:15:19.985101 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:15:19.987958 master-0 kubenswrapper[7614]: I0224 05:15:19.987873 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" event={"ID":"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e","Type":"ContainerStarted","Data":"101a2de0e94a8e7a027187782cc7337b6db7f37c8342dc855b1739d02289e2d4"} Feb 24 05:15:19.988052 master-0 kubenswrapper[7614]: I0224 05:15:19.987980 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" event={"ID":"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e","Type":"ContainerStarted","Data":"1cdecdd4e224aba7f2c38ee5bbc169664943dab3007ec1a09516d97eda81ae71"} Feb 24 05:15:20.001577 master-0 kubenswrapper[7614]: I0224 05:15:20.001499 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" podStartSLOduration=6.001473299 podStartE2EDuration="6.001473299s" podCreationTimestamp="2026-02-24 05:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:15:19.996578973 +0000 UTC m=+51.031322169" watchObservedRunningTime="2026-02-24 05:15:20.001473299 +0000 UTC m=+51.036216465" Feb 24 05:15:20.028748 master-0 kubenswrapper[7614]: I0224 05:15:20.028623 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=3.028561242 podStartE2EDuration="3.028561242s" podCreationTimestamp="2026-02-24 05:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:15:20.017099893 +0000 UTC m=+51.051843059" watchObservedRunningTime="2026-02-24 05:15:20.028561242 +0000 UTC m=+51.063304438" Feb 24 05:15:20.053157 master-0 kubenswrapper[7614]: I0224 05:15:20.053055 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" podStartSLOduration=7.858361666 
podStartE2EDuration="15.053033512s" podCreationTimestamp="2026-02-24 05:15:05 +0000 UTC" firstStartedPulling="2026-02-24 05:15:10.724209624 +0000 UTC m=+41.758952780" lastFinishedPulling="2026-02-24 05:15:17.91888147 +0000 UTC m=+48.953624626" observedRunningTime="2026-02-24 05:15:20.052792006 +0000 UTC m=+51.087535192" watchObservedRunningTime="2026-02-24 05:15:20.053033512 +0000 UTC m=+51.087776668" Feb 24 05:15:20.885468 master-0 kubenswrapper[7614]: I0224 05:15:20.885395 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-557cb6655b-75nhl"] Feb 24 05:15:20.885768 master-0 kubenswrapper[7614]: E0224 05:15:20.885586 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5112c3a6-9296-4687-9922-f7e4156d2de7" containerName="controller-manager" Feb 24 05:15:20.885768 master-0 kubenswrapper[7614]: I0224 05:15:20.885602 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="5112c3a6-9296-4687-9922-f7e4156d2de7" containerName="controller-manager" Feb 24 05:15:20.885768 master-0 kubenswrapper[7614]: I0224 05:15:20.885686 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="5112c3a6-9296-4687-9922-f7e4156d2de7" containerName="controller-manager" Feb 24 05:15:20.886063 master-0 kubenswrapper[7614]: I0224 05:15:20.886027 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:20.890147 master-0 kubenswrapper[7614]: I0224 05:15:20.889350 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 05:15:20.890147 master-0 kubenswrapper[7614]: I0224 05:15:20.889739 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 05:15:20.890147 master-0 kubenswrapper[7614]: I0224 05:15:20.890071 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 05:15:20.891939 master-0 kubenswrapper[7614]: I0224 05:15:20.890923 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 05:15:20.891939 master-0 kubenswrapper[7614]: I0224 05:15:20.891045 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 05:15:20.897980 master-0 kubenswrapper[7614]: I0224 05:15:20.897936 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 05:15:20.904896 master-0 kubenswrapper[7614]: I0224 05:15:20.904791 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-proxy-ca-bundles\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:20.904896 master-0 kubenswrapper[7614]: I0224 05:15:20.904832 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-client-ca\") pod 
\"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:20.904896 master-0 kubenswrapper[7614]: I0224 05:15:20.904853 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-config\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:20.905090 master-0 kubenswrapper[7614]: I0224 05:15:20.904910 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e75c6622-29b4-4da8-8409-be898aab9f49-serving-cert\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:20.905090 master-0 kubenswrapper[7614]: I0224 05:15:20.904945 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnf5\" (UniqueName: \"kubernetes.io/projected/e75c6622-29b4-4da8-8409-be898aab9f49-kube-api-access-qgnf5\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:20.905764 master-0 kubenswrapper[7614]: I0224 05:15:20.905746 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-557cb6655b-75nhl"] Feb 24 05:15:21.013121 master-0 kubenswrapper[7614]: I0224 05:15:21.013064 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnf5\" (UniqueName: \"kubernetes.io/projected/e75c6622-29b4-4da8-8409-be898aab9f49-kube-api-access-qgnf5\") pod 
\"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.014584 master-0 kubenswrapper[7614]: I0224 05:15:21.014559 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-proxy-ca-bundles\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.014720 master-0 kubenswrapper[7614]: I0224 05:15:21.014703 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-client-ca\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.014917 master-0 kubenswrapper[7614]: I0224 05:15:21.014904 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-config\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.015091 master-0 kubenswrapper[7614]: I0224 05:15:21.015078 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e75c6622-29b4-4da8-8409-be898aab9f49-serving-cert\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.017254 master-0 kubenswrapper[7614]: I0224 05:15:21.017209 7614 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-config\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.017407 master-0 kubenswrapper[7614]: I0224 05:15:21.017350 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-client-ca\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.018229 master-0 kubenswrapper[7614]: I0224 05:15:21.018193 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-proxy-ca-bundles\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.019378 master-0 kubenswrapper[7614]: I0224 05:15:21.019362 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e75c6622-29b4-4da8-8409-be898aab9f49-serving-cert\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:21.042914 master-0 kubenswrapper[7614]: I0224 05:15:21.042871 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnf5\" (UniqueName: \"kubernetes.io/projected/e75c6622-29b4-4da8-8409-be898aab9f49-kube-api-access-qgnf5\") pod \"controller-manager-557cb6655b-75nhl\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") " pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 
05:15:21.214830 master-0 kubenswrapper[7614]: I0224 05:15:21.214705 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:22.199185 master-0 kubenswrapper[7614]: I0224 05:15:22.199101 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-557cb6655b-75nhl"] Feb 24 05:15:22.220722 master-0 kubenswrapper[7614]: W0224 05:15:22.220646 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode75c6622_29b4_4da8_8409_be898aab9f49.slice/crio-2ac0807aac1339b1738831a83bed34bab87cdee7e6e8f967e0b4a894d0139f4e WatchSource:0}: Error finding container 2ac0807aac1339b1738831a83bed34bab87cdee7e6e8f967e0b4a894d0139f4e: Status 404 returned error can't find the container with id 2ac0807aac1339b1738831a83bed34bab87cdee7e6e8f967e0b4a894d0139f4e Feb 24 05:15:23.014002 master-0 kubenswrapper[7614]: I0224 05:15:23.013930 7614 generic.go:334] "Generic (PLEG): container finished" podID="812552f3-09b1-43f8-b910-c78e776127f8" containerID="b9d581ca9c4c50dcca1980b09409483a53c5ca25eba6a7a71de1be1dc2987a3e" exitCode=0 Feb 24 05:15:23.014374 master-0 kubenswrapper[7614]: I0224 05:15:23.014033 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" event={"ID":"812552f3-09b1-43f8-b910-c78e776127f8","Type":"ContainerDied","Data":"b9d581ca9c4c50dcca1980b09409483a53c5ca25eba6a7a71de1be1dc2987a3e"} Feb 24 05:15:23.016198 master-0 kubenswrapper[7614]: I0224 05:15:23.016134 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" event={"ID":"e75c6622-29b4-4da8-8409-be898aab9f49","Type":"ContainerStarted","Data":"caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b"} Feb 24 05:15:23.016198 master-0 kubenswrapper[7614]: I0224 
05:15:23.016199 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" event={"ID":"e75c6622-29b4-4da8-8409-be898aab9f49","Type":"ContainerStarted","Data":"2ac0807aac1339b1738831a83bed34bab87cdee7e6e8f967e0b4a894d0139f4e"} Feb 24 05:15:23.016498 master-0 kubenswrapper[7614]: I0224 05:15:23.016457 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:23.026993 master-0 kubenswrapper[7614]: I0224 05:15:23.026906 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:15:23.069495 master-0 kubenswrapper[7614]: I0224 05:15:23.069417 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" podStartSLOduration=9.069399086 podStartE2EDuration="9.069399086s" podCreationTimestamp="2026-02-24 05:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:15:23.067157063 +0000 UTC m=+54.101900219" watchObservedRunningTime="2026-02-24 05:15:23.069399086 +0000 UTC m=+54.104142242" Feb 24 05:15:23.207337 master-0 kubenswrapper[7614]: I0224 05:15:23.203433 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:23.207337 master-0 kubenswrapper[7614]: I0224 05:15:23.204333 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:23.229341 master-0 kubenswrapper[7614]: I0224 05:15:23.228430 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:15:23.731097 master-0 kubenswrapper[7614]: I0224 
05:15:23.731054 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"] Feb 24 05:15:23.731686 master-0 kubenswrapper[7614]: I0224 05:15:23.731667 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" Feb 24 05:15:23.738751 master-0 kubenswrapper[7614]: W0224 05:15:23.738712 7614 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: secrets "control-plane-machine-set-operator-tls" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'master-0' and this object Feb 24 05:15:23.738884 master-0 kubenswrapper[7614]: E0224 05:15:23.738768 7614 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"control-plane-machine-set-operator-tls\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 24 05:15:23.740099 master-0 kubenswrapper[7614]: I0224 05:15:23.740067 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 05:15:23.740461 master-0 kubenswrapper[7614]: I0224 05:15:23.740442 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-kf8b6" Feb 24 05:15:23.741208 master-0 kubenswrapper[7614]: I0224 05:15:23.741176 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 05:15:23.750458 master-0 
kubenswrapper[7614]: I0224 05:15:23.750411 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:15:23.750458 master-0 kubenswrapper[7614]: I0224 05:15:23.750456 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh2pc\" (UniqueName: \"kubernetes.io/projected/32fd577d-8966-4ab1-95cf-357291084156-kube-api-access-fh2pc\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:15:23.754428 master-0 kubenswrapper[7614]: I0224 05:15:23.754395 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"]
Feb 24 05:15:23.851229 master-0 kubenswrapper[7614]: I0224 05:15:23.851160 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:15:23.851229 master-0 kubenswrapper[7614]: I0224 05:15:23.851219 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh2pc\" (UniqueName: \"kubernetes.io/projected/32fd577d-8966-4ab1-95cf-357291084156-kube-api-access-fh2pc\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:15:23.881205 master-0 kubenswrapper[7614]: I0224 05:15:23.881137 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh2pc\" (UniqueName: \"kubernetes.io/projected/32fd577d-8966-4ab1-95cf-357291084156-kube-api-access-fh2pc\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:15:24.022910 master-0 kubenswrapper[7614]: I0224 05:15:24.022763 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" event={"ID":"812552f3-09b1-43f8-b910-c78e776127f8","Type":"ContainerStarted","Data":"7a733ab2f63d82d074cb8a0870ad18b6b982d2d20a77a7c8ddb84638324230a4"}
Feb 24 05:15:24.027640 master-0 kubenswrapper[7614]: I0224 05:15:24.027597 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m"
Feb 24 05:15:24.047148 master-0 kubenswrapper[7614]: I0224 05:15:24.047061 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" podStartSLOduration=8.64164153 podStartE2EDuration="12.04704187s" podCreationTimestamp="2026-02-24 05:15:12 +0000 UTC" firstStartedPulling="2026-02-24 05:15:18.377189354 +0000 UTC m=+49.411932520" lastFinishedPulling="2026-02-24 05:15:21.782589704 +0000 UTC m=+52.817332860" observedRunningTime="2026-02-24 05:15:24.044475388 +0000 UTC m=+55.079218544" watchObservedRunningTime="2026-02-24 05:15:24.04704187 +0000 UTC m=+55.081785026"
Feb 24 05:15:24.852420 master-0 kubenswrapper[7614]: E0224 05:15:24.852341 7614 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:15:24.853116 master-0 kubenswrapper[7614]: E0224 05:15:24.852530 7614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls podName:32fd577d-8966-4ab1-95cf-357291084156 nodeName:}" failed. No retries permitted until 2026-02-24 05:15:25.352489246 +0000 UTC m=+56.387232452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-686847ff5f-zzvtt" (UID: "32fd577d-8966-4ab1-95cf-357291084156") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:15:25.252725 master-0 kubenswrapper[7614]: I0224 05:15:25.252663 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 24 05:15:25.373989 master-0 kubenswrapper[7614]: I0224 05:15:25.373912 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:15:25.377520 master-0 kubenswrapper[7614]: I0224 05:15:25.377479 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:15:25.546210 master-0 kubenswrapper[7614]: I0224 05:15:25.546063 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:15:26.001837 master-0 kubenswrapper[7614]: I0224 05:15:26.001762 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"]
Feb 24 05:15:26.016738 master-0 kubenswrapper[7614]: W0224 05:15:26.016627 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32fd577d_8966_4ab1_95cf_357291084156.slice/crio-f5885425638056ce98b14e0964ddb8ab6fa82dc0c949c580e04a0b062a448107 WatchSource:0}: Error finding container f5885425638056ce98b14e0964ddb8ab6fa82dc0c949c580e04a0b062a448107: Status 404 returned error can't find the container with id f5885425638056ce98b14e0964ddb8ab6fa82dc0c949c580e04a0b062a448107
Feb 24 05:15:26.033863 master-0 kubenswrapper[7614]: I0224 05:15:26.033780 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" event={"ID":"32fd577d-8966-4ab1-95cf-357291084156","Type":"ContainerStarted","Data":"f5885425638056ce98b14e0964ddb8ab6fa82dc0c949c580e04a0b062a448107"}
Feb 24 05:15:27.751662 master-0 kubenswrapper[7614]: I0224 05:15:27.746207 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"]
Feb 24 05:15:27.759177 master-0 kubenswrapper[7614]: I0224 05:15:27.755393 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:27.759177 master-0 kubenswrapper[7614]: I0224 05:15:27.757690 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 05:15:27.759177 master-0 kubenswrapper[7614]: I0224 05:15:27.758244 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 24 05:15:27.759177 master-0 kubenswrapper[7614]: I0224 05:15:27.758878 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 05:15:27.759177 master-0 kubenswrapper[7614]: I0224 05:15:27.758911 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-w9h5v"
Feb 24 05:15:27.759177 master-0 kubenswrapper[7614]: I0224 05:15:27.758895 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 05:15:27.765256 master-0 kubenswrapper[7614]: I0224 05:15:27.765185 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 24 05:15:27.811212 master-0 kubenswrapper[7614]: I0224 05:15:27.811142 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-auth-proxy-config\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:27.811212 master-0 kubenswrapper[7614]: I0224 05:15:27.811205 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxj59\" (UniqueName: \"kubernetes.io/projected/fe235661-d492-48fc-92e6-d9e1938daeb7-kube-api-access-xxj59\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:27.811545 master-0 kubenswrapper[7614]: I0224 05:15:27.811240 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-config\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:27.811545 master-0 kubenswrapper[7614]: I0224 05:15:27.811266 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fe235661-d492-48fc-92e6-d9e1938daeb7-machine-approver-tls\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.306756 master-0 kubenswrapper[7614]: I0224 05:15:28.306664 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fe235661-d492-48fc-92e6-d9e1938daeb7-machine-approver-tls\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.306756 master-0 kubenswrapper[7614]: I0224 05:15:28.306755 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-auth-proxy-config\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.307154 master-0 kubenswrapper[7614]: I0224 05:15:28.306794 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxj59\" (UniqueName: \"kubernetes.io/projected/fe235661-d492-48fc-92e6-d9e1938daeb7-kube-api-access-xxj59\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.307699 master-0 kubenswrapper[7614]: I0224 05:15:28.307640 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:28.308468 master-0 kubenswrapper[7614]: I0224 05:15:28.308402 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-config\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.308628 master-0 kubenswrapper[7614]: I0224 05:15:28.308510 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:28.309557 master-0 kubenswrapper[7614]: I0224 05:15:28.309467 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-config\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.309557 master-0 kubenswrapper[7614]: I0224 05:15:28.309478 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-auth-proxy-config\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.325574 master-0 kubenswrapper[7614]: I0224 05:15:28.325483 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:28.333387 master-0 kubenswrapper[7614]: I0224 05:15:28.330074 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fe235661-d492-48fc-92e6-d9e1938daeb7-machine-approver-tls\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.346354 master-0 kubenswrapper[7614]: I0224 05:15:28.346229 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxj59\" (UniqueName: \"kubernetes.io/projected/fe235661-d492-48fc-92e6-d9e1938daeb7-kube-api-access-xxj59\") pod \"machine-approver-798b897698-6hgvq\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") " pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.379121 master-0 kubenswrapper[7614]: I0224 05:15:28.379030 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:15:28.398762 master-0 kubenswrapper[7614]: W0224 05:15:28.398678 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe235661_d492_48fc_92e6_d9e1938daeb7.slice/crio-8815be107409d9117e98e0fbd4a569a1ac9718c2f1970ad5fa33996f9d7cc8ad WatchSource:0}: Error finding container 8815be107409d9117e98e0fbd4a569a1ac9718c2f1970ad5fa33996f9d7cc8ad: Status 404 returned error can't find the container with id 8815be107409d9117e98e0fbd4a569a1ac9718c2f1970ad5fa33996f9d7cc8ad
Feb 24 05:15:28.439714 master-0 kubenswrapper[7614]: I0224 05:15:28.437996 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"]
Feb 24 05:15:28.440579 master-0 kubenswrapper[7614]: I0224 05:15:28.440497 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="1d886fdf-fd74-45de-b7c0-2e8e75eb994e" containerName="installer" containerID="cri-o://f43db6490300b530630636ca2020a29922f028b9720d4ba80166936836bf6b4e" gracePeriod=30
Feb 24 05:15:29.352087 master-0 kubenswrapper[7614]: I0224 05:15:29.351994 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" event={"ID":"fe235661-d492-48fc-92e6-d9e1938daeb7","Type":"ContainerStarted","Data":"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2"}
Feb 24 05:15:29.352087 master-0 kubenswrapper[7614]: I0224 05:15:29.352057 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" event={"ID":"fe235661-d492-48fc-92e6-d9e1938daeb7","Type":"ContainerStarted","Data":"8815be107409d9117e98e0fbd4a569a1ac9718c2f1970ad5fa33996f9d7cc8ad"}
Feb 24 05:15:29.353990 master-0 kubenswrapper[7614]: I0224 05:15:29.353864 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" event={"ID":"32fd577d-8966-4ab1-95cf-357291084156","Type":"ContainerStarted","Data":"cd2e094a618f188c882e23ef5f50ea70a38793ab6e08f1bfec1cd4a082e97144"}
Feb 24 05:15:29.368010 master-0 kubenswrapper[7614]: I0224 05:15:29.367459 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:15:29.390742 master-0 kubenswrapper[7614]: I0224 05:15:29.389014 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" podStartSLOduration=4.026753854 podStartE2EDuration="6.388982628s" podCreationTimestamp="2026-02-24 05:15:23 +0000 UTC" firstStartedPulling="2026-02-24 05:15:26.01893026 +0000 UTC m=+57.053673436" lastFinishedPulling="2026-02-24 05:15:28.381159054 +0000 UTC m=+59.415902210" observedRunningTime="2026-02-24 05:15:29.384114222 +0000 UTC m=+60.418857388" watchObservedRunningTime="2026-02-24 05:15:29.388982628 +0000 UTC m=+60.423725814"
Feb 24 05:15:29.958734 master-0 kubenswrapper[7614]: I0224 05:15:29.958643 7614 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Feb 24 05:15:29.959093 master-0 kubenswrapper[7614]: I0224 05:15:29.959039 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl" containerID="cri-o://60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0" gracePeriod=30
Feb 24 05:15:29.959249 master-0 kubenswrapper[7614]: I0224 05:15:29.959121 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd" containerID="cri-o://b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de" gracePeriod=30
Feb 24 05:15:29.963581 master-0 kubenswrapper[7614]: I0224 05:15:29.962668 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 24 05:15:29.963581 master-0 kubenswrapper[7614]: E0224 05:15:29.962972 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd"
Feb 24 05:15:29.963581 master-0 kubenswrapper[7614]: I0224 05:15:29.962990 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd"
Feb 24 05:15:29.963581 master-0 kubenswrapper[7614]: E0224 05:15:29.963005 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl"
Feb 24 05:15:29.963581 master-0 kubenswrapper[7614]: I0224 05:15:29.963013 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl"
Feb 24 05:15:29.963581 master-0 kubenswrapper[7614]: I0224 05:15:29.963111 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcdctl"
Feb 24 05:15:29.963581 master-0 kubenswrapper[7614]: I0224 05:15:29.963132 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="12dab5d350ebc129b0bfa4714d330b15" containerName="etcd"
Feb 24 05:15:29.979047 master-0 kubenswrapper[7614]: I0224 05:15:29.976966 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.139346 master-0 kubenswrapper[7614]: I0224 05:15:30.139248 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.139666 master-0 kubenswrapper[7614]: I0224 05:15:30.139361 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.139666 master-0 kubenswrapper[7614]: I0224 05:15:30.139476 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.139666 master-0 kubenswrapper[7614]: I0224 05:15:30.139534 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.139666 master-0 kubenswrapper[7614]: I0224 05:15:30.139592 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.139924 master-0 kubenswrapper[7614]: I0224 05:15:30.139760 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.240993 master-0 kubenswrapper[7614]: I0224 05:15:30.240922 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.240993 master-0 kubenswrapper[7614]: I0224 05:15:30.240991 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241350 master-0 kubenswrapper[7614]: I0224 05:15:30.241246 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241427 master-0 kubenswrapper[7614]: I0224 05:15:30.241356 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241469 master-0 kubenswrapper[7614]: I0224 05:15:30.241424 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241502 master-0 kubenswrapper[7614]: I0224 05:15:30.241448 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241502 master-0 kubenswrapper[7614]: I0224 05:15:30.241468 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241567 master-0 kubenswrapper[7614]: I0224 05:15:30.241525 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241598 master-0 kubenswrapper[7614]: I0224 05:15:30.241527 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241598 master-0 kubenswrapper[7614]: I0224 05:15:30.241572 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241660 master-0 kubenswrapper[7614]: I0224 05:15:30.241596 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:30.241660 master-0 kubenswrapper[7614]: I0224 05:15:30.241608 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"etcd-master-0\" (UID: \"18a83278819db2092fa26d8274eb3f00\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:31.371506 master-0 kubenswrapper[7614]: I0224 05:15:31.371409 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" event={"ID":"fe235661-d492-48fc-92e6-d9e1938daeb7","Type":"ContainerStarted","Data":"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d"}
Feb 24 05:15:40.758470 master-0 kubenswrapper[7614]: E0224 05:15:40.758350 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:15:43.011192 master-0 kubenswrapper[7614]: E0224 05:15:43.011048 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:43.012266 master-0 kubenswrapper[7614]: I0224 05:15:43.011873 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 24 05:15:43.635924 master-0 kubenswrapper[7614]: I0224 05:15:43.635745 7614 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46" exitCode=0
Feb 24 05:15:43.635924 master-0 kubenswrapper[7614]: I0224 05:15:43.635866 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46"}
Feb 24 05:15:43.635924 master-0 kubenswrapper[7614]: I0224 05:15:43.635911 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"feee9b866457e6daa89dcd5aa732adcbad0ca3132fa440c7a140181cf2874eea"}
Feb 24 05:15:43.639596 master-0 kubenswrapper[7614]: I0224 05:15:43.639543 7614 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8" exitCode=1
Feb 24 05:15:43.639662 master-0 kubenswrapper[7614]: I0224 05:15:43.639604 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8"}
Feb 24 05:15:43.639662 master-0 kubenswrapper[7614]: I0224 05:15:43.639653 7614 scope.go:117] "RemoveContainer" containerID="d6dd4a61ed7af8ebd78eddfac6cf4fdcc660e18cd4faabe4c2d616a566d86ff6"
Feb 24 05:15:43.640654 master-0 kubenswrapper[7614]: I0224 05:15:43.640596 7614 scope.go:117] "RemoveContainer" containerID="487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8"
Feb 24 05:15:44.239232 master-0 kubenswrapper[7614]: I0224 05:15:44.239146 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:15:44.650470 master-0 kubenswrapper[7614]: I0224 05:15:44.650350 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429"}
Feb 24 05:15:44.652453 master-0 kubenswrapper[7614]: I0224 05:15:44.652382 7614 generic.go:334] "Generic (PLEG): container finished" podID="2d3d57f1-cd67-4f1d-b267-f652b9bb3448" containerID="9b98ab8d2dc17a91ddedb320e3bb1181b379c4590b7ec6f960ba108eb0e71383" exitCode=0
Feb 24 05:15:44.652453 master-0 kubenswrapper[7614]: I0224 05:15:44.652450 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2d3d57f1-cd67-4f1d-b267-f652b9bb3448","Type":"ContainerDied","Data":"9b98ab8d2dc17a91ddedb320e3bb1181b379c4590b7ec6f960ba108eb0e71383"}
Feb 24 05:15:45.200535 master-0 kubenswrapper[7614]: I0224 05:15:45.200433 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:15:45.578452 master-0 kubenswrapper[7614]: I0224 05:15:45.578374 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:15:46.059360 master-0 kubenswrapper[7614]: I0224 05:15:46.059196 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 24 05:15:46.224051 master-0 kubenswrapper[7614]: I0224 05:15:46.223972 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kube-api-access\") pod \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") "
Feb 24 05:15:46.224051 master-0 kubenswrapper[7614]: I0224 05:15:46.224048 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kubelet-dir\") pod \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") "
Feb 24 05:15:46.224333 master-0 kubenswrapper[7614]: I0224 05:15:46.224162 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-var-lock\") pod \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\" (UID: \"2d3d57f1-cd67-4f1d-b267-f652b9bb3448\") "
Feb 24 05:15:46.224560 master-0 kubenswrapper[7614]: I0224 05:15:46.224519 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-var-lock" (OuterVolumeSpecName: "var-lock") pod "2d3d57f1-cd67-4f1d-b267-f652b9bb3448" (UID: "2d3d57f1-cd67-4f1d-b267-f652b9bb3448"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:15:46.224608 master-0 kubenswrapper[7614]: I0224 05:15:46.224530 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d3d57f1-cd67-4f1d-b267-f652b9bb3448" (UID: "2d3d57f1-cd67-4f1d-b267-f652b9bb3448"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:15:46.228885 master-0 kubenswrapper[7614]: I0224 05:15:46.228848 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d3d57f1-cd67-4f1d-b267-f652b9bb3448" (UID: "2d3d57f1-cd67-4f1d-b267-f652b9bb3448"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:15:46.325602 master-0 kubenswrapper[7614]: I0224 05:15:46.325417 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:15:46.325602 master-0 kubenswrapper[7614]: I0224 05:15:46.325481 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:15:46.325602 master-0 kubenswrapper[7614]: I0224 05:15:46.325504 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d3d57f1-cd67-4f1d-b267-f652b9bb3448-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:15:46.670735 master-0 kubenswrapper[7614]: I0224 05:15:46.670483 7614 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="ec92c2ccaab799d81de24af8faba27c40dd8197fcd80279d1de6e4daee2ed87c" exitCode=1
Feb 24 05:15:46.671826 master-0 kubenswrapper[7614]: I0224 05:15:46.670596 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerDied","Data":"ec92c2ccaab799d81de24af8faba27c40dd8197fcd80279d1de6e4daee2ed87c"}
Feb 24 05:15:46.673450 master-0 kubenswrapper[7614]: I0224 05:15:46.673392 7614 scope.go:117] "RemoveContainer" containerID="ec92c2ccaab799d81de24af8faba27c40dd8197fcd80279d1de6e4daee2ed87c"
Feb 24 05:15:46.675055 master-0 kubenswrapper[7614]: I0224 05:15:46.674996 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 24 05:15:46.675212 master-0 kubenswrapper[7614]: I0224 05:15:46.675015 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2d3d57f1-cd67-4f1d-b267-f652b9bb3448","Type":"ContainerDied","Data":"345bd8023fa43822945ff7359cdfe764906fb44812bf8f7d37334c964ddefedc"}
Feb 24 05:15:46.675212 master-0 kubenswrapper[7614]: I0224 05:15:46.675109 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345bd8023fa43822945ff7359cdfe764906fb44812bf8f7d37334c964ddefedc"
Feb 24 05:15:47.696621 master-0 kubenswrapper[7614]: I0224 05:15:47.696538 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"56c3cb71c9851003c8de7e7c5db4b87e","Type":"ContainerStarted","Data":"28b8da242544132c6f029ed620036b6ee2e59516b410b237f207e8e4173db9a8"}
Feb 24 05:15:48.200642 master-0 kubenswrapper[7614]: I0224 05:15:48.200490 7614 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:15:49.714044 master-0 kubenswrapper[7614]: I0224 05:15:49.713969 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1d886fdf-fd74-45de-b7c0-2e8e75eb994e/installer/0.log"
Feb 24 05:15:49.714852 master-0 kubenswrapper[7614]: I0224 05:15:49.714051 7614 generic.go:334] "Generic (PLEG): container finished" podID="1d886fdf-fd74-45de-b7c0-2e8e75eb994e" containerID="f43db6490300b530630636ca2020a29922f028b9720d4ba80166936836bf6b4e" exitCode=1
Feb 24 05:15:49.714852 master-0 kubenswrapper[7614]: I0224 05:15:49.714097 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1d886fdf-fd74-45de-b7c0-2e8e75eb994e","Type":"ContainerDied","Data":"f43db6490300b530630636ca2020a29922f028b9720d4ba80166936836bf6b4e"}
Feb 24 05:15:50.450934 master-0 kubenswrapper[7614]: I0224 05:15:50.450863 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1d886fdf-fd74-45de-b7c0-2e8e75eb994e/installer/0.log"
Feb 24 05:15:50.451261 master-0 kubenswrapper[7614]: I0224 05:15:50.450967 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 24 05:15:50.590354 master-0 kubenswrapper[7614]: I0224 05:15:50.590226 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kube-api-access\") pod \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") "
Feb 24 05:15:50.590713 master-0 kubenswrapper[7614]: I0224 05:15:50.590594 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kubelet-dir\") pod \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") "
Feb 24 05:15:50.590713 master-0 kubenswrapper[7614]: I0224 05:15:50.590675 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-var-lock\") pod
\"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\" (UID: \"1d886fdf-fd74-45de-b7c0-2e8e75eb994e\") " Feb 24 05:15:50.590713 master-0 kubenswrapper[7614]: I0224 05:15:50.590698 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1d886fdf-fd74-45de-b7c0-2e8e75eb994e" (UID: "1d886fdf-fd74-45de-b7c0-2e8e75eb994e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:15:50.590993 master-0 kubenswrapper[7614]: I0224 05:15:50.590886 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-var-lock" (OuterVolumeSpecName: "var-lock") pod "1d886fdf-fd74-45de-b7c0-2e8e75eb994e" (UID: "1d886fdf-fd74-45de-b7c0-2e8e75eb994e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:15:50.591089 master-0 kubenswrapper[7614]: I0224 05:15:50.591065 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:50.591089 master-0 kubenswrapper[7614]: I0224 05:15:50.591084 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:50.595760 master-0 kubenswrapper[7614]: I0224 05:15:50.595670 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1d886fdf-fd74-45de-b7c0-2e8e75eb994e" (UID: "1d886fdf-fd74-45de-b7c0-2e8e75eb994e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:15:50.692428 master-0 kubenswrapper[7614]: I0224 05:15:50.692273 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1d886fdf-fd74-45de-b7c0-2e8e75eb994e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:50.723730 master-0 kubenswrapper[7614]: I0224 05:15:50.723668 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_1d886fdf-fd74-45de-b7c0-2e8e75eb994e/installer/0.log" Feb 24 05:15:50.724585 master-0 kubenswrapper[7614]: I0224 05:15:50.723768 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"1d886fdf-fd74-45de-b7c0-2e8e75eb994e","Type":"ContainerDied","Data":"1bbbffdd27360f786259f5c979d3eb55f2c697612981d0e1286af22d00168b24"} Feb 24 05:15:50.724585 master-0 kubenswrapper[7614]: I0224 05:15:50.723835 7614 scope.go:117] "RemoveContainer" containerID="f43db6490300b530630636ca2020a29922f028b9720d4ba80166936836bf6b4e" Feb 24 05:15:50.724585 master-0 kubenswrapper[7614]: I0224 05:15:50.723937 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 24 05:15:50.759029 master-0 kubenswrapper[7614]: E0224 05:15:50.758939 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:15:51.025792 master-0 kubenswrapper[7614]: E0224 05:15:51.025614 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:15:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:15:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:15:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:15:41Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497
f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"
],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:15:52.266230 master-0 kubenswrapper[7614]: I0224 05:15:52.266123 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:15:56.646110 master-0 kubenswrapper[7614]: E0224 05:15:56.645968 7614 
kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 05:15:56.768846 master-0 kubenswrapper[7614]: I0224 05:15:56.768765 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_74d070e9-4193-4598-ad68-15955b07d649/installer/0.log" Feb 24 05:15:56.769215 master-0 kubenswrapper[7614]: I0224 05:15:56.768857 7614 generic.go:334] "Generic (PLEG): container finished" podID="74d070e9-4193-4598-ad68-15955b07d649" containerID="ec62ccfb72151c7c722b6450bced3a8fc5369d64de69ed787b605e7b33bf1f14" exitCode=1 Feb 24 05:15:56.769215 master-0 kubenswrapper[7614]: I0224 05:15:56.768915 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"74d070e9-4193-4598-ad68-15955b07d649","Type":"ContainerDied","Data":"ec62ccfb72151c7c722b6450bced3a8fc5369d64de69ed787b605e7b33bf1f14"} Feb 24 05:15:57.780525 master-0 kubenswrapper[7614]: I0224 05:15:57.780450 7614 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c" exitCode=0 Feb 24 05:15:57.781429 master-0 kubenswrapper[7614]: I0224 05:15:57.780542 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c"} Feb 24 05:15:57.785470 master-0 kubenswrapper[7614]: I0224 05:15:57.785417 7614 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de" exitCode=0 Feb 24 05:15:58.169070 master-0 kubenswrapper[7614]: I0224 05:15:58.168988 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_74d070e9-4193-4598-ad68-15955b07d649/installer/0.log" Feb 24 05:15:58.169445 master-0 kubenswrapper[7614]: I0224 05:15:58.169110 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:15:58.200816 master-0 kubenswrapper[7614]: I0224 05:15:58.200669 7614 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:15:58.322590 master-0 kubenswrapper[7614]: I0224 05:15:58.322469 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74d070e9-4193-4598-ad68-15955b07d649-kube-api-access\") pod \"74d070e9-4193-4598-ad68-15955b07d649\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " Feb 24 05:15:58.322590 master-0 kubenswrapper[7614]: I0224 05:15:58.322596 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-kubelet-dir\") pod \"74d070e9-4193-4598-ad68-15955b07d649\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " Feb 24 05:15:58.322957 master-0 kubenswrapper[7614]: I0224 05:15:58.322702 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "74d070e9-4193-4598-ad68-15955b07d649" (UID: "74d070e9-4193-4598-ad68-15955b07d649"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:15:58.322957 master-0 kubenswrapper[7614]: I0224 05:15:58.322753 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-var-lock\") pod \"74d070e9-4193-4598-ad68-15955b07d649\" (UID: \"74d070e9-4193-4598-ad68-15955b07d649\") " Feb 24 05:15:58.322957 master-0 kubenswrapper[7614]: I0224 05:15:58.322878 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-var-lock" (OuterVolumeSpecName: "var-lock") pod "74d070e9-4193-4598-ad68-15955b07d649" (UID: "74d070e9-4193-4598-ad68-15955b07d649"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:15:58.323242 master-0 kubenswrapper[7614]: I0224 05:15:58.323137 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:58.323242 master-0 kubenswrapper[7614]: I0224 05:15:58.323174 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/74d070e9-4193-4598-ad68-15955b07d649-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:58.329046 master-0 kubenswrapper[7614]: I0224 05:15:58.328961 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d070e9-4193-4598-ad68-15955b07d649-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "74d070e9-4193-4598-ad68-15955b07d649" (UID: "74d070e9-4193-4598-ad68-15955b07d649"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:15:58.424604 master-0 kubenswrapper[7614]: I0224 05:15:58.424291 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/74d070e9-4193-4598-ad68-15955b07d649-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 05:15:58.793529 master-0 kubenswrapper[7614]: I0224 05:15:58.793401 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_74d070e9-4193-4598-ad68-15955b07d649/installer/0.log" Feb 24 05:15:58.793529 master-0 kubenswrapper[7614]: I0224 05:15:58.793476 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"74d070e9-4193-4598-ad68-15955b07d649","Type":"ContainerDied","Data":"b3e22a12aff8d5b6b6bf25f421a38e1ab75e1b3a0b022c9941c1b0c879a1106e"} Feb 24 05:15:58.793529 master-0 kubenswrapper[7614]: I0224 05:15:58.793508 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e22a12aff8d5b6b6bf25f421a38e1ab75e1b3a0b022c9941c1b0c879a1106e" Feb 24 05:15:58.794799 master-0 kubenswrapper[7614]: I0224 05:15:58.793560 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 24 05:16:00.101741 master-0 kubenswrapper[7614]: I0224 05:16:00.101599 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 24 05:16:00.101741 master-0 kubenswrapper[7614]: I0224 05:16:00.101755 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:16:00.251405 master-0 kubenswrapper[7614]: I0224 05:16:00.251258 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 24 05:16:00.251684 master-0 kubenswrapper[7614]: I0224 05:16:00.251454 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir" (OuterVolumeSpecName: "data-dir") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:16:00.251684 master-0 kubenswrapper[7614]: I0224 05:16:00.251561 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") pod \"12dab5d350ebc129b0bfa4714d330b15\" (UID: \"12dab5d350ebc129b0bfa4714d330b15\") " Feb 24 05:16:00.251684 master-0 kubenswrapper[7614]: I0224 05:16:00.251603 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs" (OuterVolumeSpecName: "certs") pod "12dab5d350ebc129b0bfa4714d330b15" (UID: "12dab5d350ebc129b0bfa4714d330b15"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:16:00.252036 master-0 kubenswrapper[7614]: I0224 05:16:00.251985 7614 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:16:00.252036 master-0 kubenswrapper[7614]: I0224 05:16:00.252019 7614 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/12dab5d350ebc129b0bfa4714d330b15-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:16:00.760639 master-0 kubenswrapper[7614]: E0224 05:16:00.760473 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:00.811906 master-0 kubenswrapper[7614]: I0224 05:16:00.811783 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_12dab5d350ebc129b0bfa4714d330b15/etcdctl/0.log" Feb 24 05:16:00.811906 master-0 kubenswrapper[7614]: I0224 05:16:00.811867 7614 generic.go:334] "Generic (PLEG): container finished" podID="12dab5d350ebc129b0bfa4714d330b15" containerID="60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0" exitCode=137 Feb 24 05:16:00.811906 master-0 kubenswrapper[7614]: I0224 05:16:00.811935 7614 scope.go:117] "RemoveContainer" containerID="b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de" Feb 24 05:16:00.812557 master-0 kubenswrapper[7614]: I0224 05:16:00.812033 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:16:00.835750 master-0 kubenswrapper[7614]: I0224 05:16:00.835697 7614 scope.go:117] "RemoveContainer" containerID="60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0" Feb 24 05:16:00.864006 master-0 kubenswrapper[7614]: I0224 05:16:00.863949 7614 scope.go:117] "RemoveContainer" containerID="b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de" Feb 24 05:16:00.864860 master-0 kubenswrapper[7614]: E0224 05:16:00.864794 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de\": container with ID starting with b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de not found: ID does not exist" containerID="b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de" Feb 24 05:16:00.865101 master-0 kubenswrapper[7614]: I0224 05:16:00.864880 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de"} err="failed to get container status \"b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de\": rpc error: code = NotFound desc = could not find container \"b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de\": container with ID starting with b935af68bde88f6abe85c555e4940dca0ca8f352e0978bf5d688472276e6f4de not found: ID does not exist" Feb 24 05:16:00.865101 master-0 kubenswrapper[7614]: I0224 05:16:00.865096 7614 scope.go:117] "RemoveContainer" containerID="60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0" Feb 24 05:16:00.865745 master-0 kubenswrapper[7614]: E0224 05:16:00.865670 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0\": 
container with ID starting with 60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0 not found: ID does not exist" containerID="60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0" Feb 24 05:16:00.865836 master-0 kubenswrapper[7614]: I0224 05:16:00.865749 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0"} err="failed to get container status \"60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0\": rpc error: code = NotFound desc = could not find container \"60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0\": container with ID starting with 60a30f593dbe2d93b00c55ee834611c7e89cff694c686371357f4d3a921ea7a0 not found: ID does not exist" Feb 24 05:16:01.026765 master-0 kubenswrapper[7614]: E0224 05:16:01.026513 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:01.184046 master-0 kubenswrapper[7614]: I0224 05:16:01.183998 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12dab5d350ebc129b0bfa4714d330b15" path="/var/lib/kubelet/pods/12dab5d350ebc129b0bfa4714d330b15/volumes" Feb 24 05:16:01.185435 master-0 kubenswrapper[7614]: I0224 05:16:01.185279 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 05:16:03.837925 master-0 kubenswrapper[7614]: I0224 05:16:03.837704 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e44f770d-f88d-446a-a22f-51b30e89690c/installer/0.log" Feb 24 05:16:03.837925 master-0 kubenswrapper[7614]: I0224 05:16:03.837790 7614 generic.go:334] "Generic (PLEG): container finished" 
podID="e44f770d-f88d-446a-a22f-51b30e89690c" containerID="1f43a4854636c4d4d499b77fab14041aa2c65280b5d333f68ca719e5325adfaf" exitCode=1 Feb 24 05:16:03.978411 master-0 kubenswrapper[7614]: E0224 05:16:03.978131 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189716e00e207a73 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:12dab5d350ebc129b0bfa4714d330b15,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:15:29.959074419 +0000 UTC m=+60.993817585,LastTimestamp:2026-02-24 05:15:29.959074419 +0000 UTC m=+60.993817585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:16:08.200644 master-0 kubenswrapper[7614]: I0224 05:16:08.200473 7614 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:10.762933 master-0 kubenswrapper[7614]: E0224 05:16:10.762745 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:10.788123 master-0 kubenswrapper[7614]: E0224 05:16:10.788014 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error 
occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 05:16:11.027976 master-0 kubenswrapper[7614]: E0224 05:16:11.027820 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:11.899841 master-0 kubenswrapper[7614]: I0224 05:16:11.899728 7614 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b" exitCode=0 Feb 24 05:16:13.919342 master-0 kubenswrapper[7614]: I0224 05:16:13.919238 7614 generic.go:334] "Generic (PLEG): container finished" podID="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" containerID="f93fdb0961b7ab6c511e8eb1cee936b815e97917116f05d83d27c325437b676d" exitCode=0 Feb 24 05:16:15.931411 master-0 kubenswrapper[7614]: I0224 05:16:15.931286 7614 generic.go:334] "Generic (PLEG): container finished" podID="22813c83-2f60-44ad-9624-ad367cec08f7" containerID="3d7e3ee020313467e6fefd173d6752fc4e4ffcc2fae974414212fcbe51114f7d" exitCode=0 Feb 24 05:16:16.939345 master-0 kubenswrapper[7614]: I0224 05:16:16.939146 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-4fk6k_f77227c8-c52d-4a71-ae1b-792055f6f23d/network-operator/0.log" Feb 24 05:16:16.939345 master-0 kubenswrapper[7614]: I0224 05:16:16.939251 7614 generic.go:334] "Generic (PLEG): container finished" podID="f77227c8-c52d-4a71-ae1b-792055f6f23d" containerID="22b7d6a6838a4874825b0fb486995e1ecae2b2ab9edf5d7d1caac95d9b544b8e" exitCode=255 Feb 24 05:16:20.764118 master-0 kubenswrapper[7614]: E0224 05:16:20.764017 7614 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:20.764118 master-0 kubenswrapper[7614]: I0224 05:16:20.764114 7614 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 05:16:21.029152 master-0 kubenswrapper[7614]: E0224 05:16:21.028894 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:24.994461 master-0 kubenswrapper[7614]: I0224 05:16:24.994362 7614 generic.go:334] "Generic (PLEG): container finished" podID="7a2c651d-ea1a-41f2-9745-04adc8d88904" containerID="1fe643ed33a9f72192d56893c5e0183a5530b52d1fd5cb43d00c8adaabb5837c" exitCode=0 Feb 24 05:16:28.016059 master-0 kubenswrapper[7614]: I0224 05:16:28.015984 7614 generic.go:334] "Generic (PLEG): container finished" podID="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" containerID="80dce2d75efa45ca36b53637a94f5b4155d200b7759d2e7b129815f6f4324f5a" exitCode=0 Feb 24 05:16:28.019667 master-0 kubenswrapper[7614]: I0224 05:16:28.019622 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rlg4x_c106275b-72b6-4877-95c3-830f93e35375/approver/0.log" Feb 24 05:16:28.020243 master-0 kubenswrapper[7614]: I0224 05:16:28.020198 7614 generic.go:334] "Generic (PLEG): container finished" podID="c106275b-72b6-4877-95c3-830f93e35375" containerID="3c48cf95cb20519b43165b534538afb3afad0ec1beb464f9f497eefdb2dc3c0f" exitCode=1 Feb 24 05:16:30.764658 master-0 kubenswrapper[7614]: E0224 05:16:30.764524 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 24 05:16:31.029893 master-0 kubenswrapper[7614]: E0224 05:16:31.029675 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:31.029893 master-0 kubenswrapper[7614]: E0224 05:16:31.029740 7614 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 05:16:31.373482 master-0 kubenswrapper[7614]: I0224 05:16:31.373231 7614 status_manager.go:851] "Failed to get status for pod" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods machine-approver-798b897698-6hgvq)" Feb 24 05:16:35.003809 master-0 kubenswrapper[7614]: I0224 05:16:35.003521 7614 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-tfmbs container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Feb 24 05:16:35.003809 master-0 kubenswrapper[7614]: I0224 05:16:35.003677 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" podUID="7a2c651d-ea1a-41f2-9745-04adc8d88904" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Feb 24 05:16:35.189025 master-0 kubenswrapper[7614]: E0224 05:16:35.188939 7614 
mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:16:35.189366 master-0 kubenswrapper[7614]: E0224 05:16:35.189193 7614 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Feb 24 05:16:35.189366 master-0 kubenswrapper[7614]: I0224 05:16:35.189239 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:16:35.190426 master-0 kubenswrapper[7614]: I0224 05:16:35.190305 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 24 05:16:35.190608 master-0 kubenswrapper[7614]: I0224 05:16:35.190543 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429" gracePeriod=30 Feb 24 05:16:35.200969 master-0 kubenswrapper[7614]: I0224 05:16:35.200898 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 05:16:36.071229 master-0 kubenswrapper[7614]: I0224 05:16:36.071029 7614 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429" exitCode=2 Feb 24 05:16:37.982770 master-0 kubenswrapper[7614]: E0224 05:16:37.982532 7614 event.go:359] "Server rejected 
event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-approver-798b897698-6hgvq.189716e0370278df openshift-cluster-machine-approver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-machine-approver,Name:machine-approver-798b897698-6hgvq,UID:fe235661-d492-48fc-92e6-d9e1938daeb7,APIVersion:v1,ResourceVersion:8265,FieldPath:spec.containers{machine-approver-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\" in 1.872s (1.872s including waiting). Image size: 467133839 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:15:30.644973791 +0000 UTC m=+61.679716987,LastTimestamp:2026-02-24 05:15:30.644973791 +0000 UTC m=+61.679716987,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:16:40.966370 master-0 kubenswrapper[7614]: E0224 05:16:40.966187 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 24 05:16:51.222705 master-0 kubenswrapper[7614]: E0224 05:16:51.222131 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:16:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:16:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:16:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:16:41Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a
4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\
"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:16:51.367436 master-0 kubenswrapper[7614]: E0224 05:16:51.367238 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 24 05:16:54.193263 master-0 kubenswrapper[7614]: I0224 05:16:54.193175 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/0.log" Feb 24 05:16:54.193263 master-0 kubenswrapper[7614]: I0224 05:16:54.193264 7614 generic.go:334] "Generic (PLEG): container finished" podID="3d6b1ce7-1213-494c-829d-186d39eac7eb" containerID="dd6d3f4e8c90f9e72cf283fa2ee57699a971df08e7b5a82fbc21deb33aca4d26" exitCode=1 Feb 24 
05:17:01.223060 master-0 kubenswrapper[7614]: E0224 05:17:01.222984 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 05:17:02.169120 master-0 kubenswrapper[7614]: E0224 05:17:02.168929 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 24 05:17:09.205127 master-0 kubenswrapper[7614]: E0224 05:17:09.205021 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:17:09.206577 master-0 kubenswrapper[7614]: E0224 05:17:09.205410 7614 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Feb 24 05:17:09.208138 master-0 kubenswrapper[7614]: I0224 05:17:09.208091 7614 scope.go:117] "RemoveContainer" containerID="22b7d6a6838a4874825b0fb486995e1ecae2b2ab9edf5d7d1caac95d9b544b8e" Feb 24 05:17:09.208361 master-0 kubenswrapper[7614]: I0224 05:17:09.208165 7614 scope.go:117] "RemoveContainer" containerID="3d7e3ee020313467e6fefd173d6752fc4e4ffcc2fae974414212fcbe51114f7d" Feb 24 05:17:09.208489 master-0 kubenswrapper[7614]: I0224 05:17:09.208419 7614 scope.go:117] "RemoveContainer" containerID="3c48cf95cb20519b43165b534538afb3afad0ec1beb464f9f497eefdb2dc3c0f" Feb 24 05:17:09.213478 master-0 kubenswrapper[7614]: I0224 05:17:09.213420 7614 scope.go:117] "RemoveContainer" containerID="dd6d3f4e8c90f9e72cf283fa2ee57699a971df08e7b5a82fbc21deb33aca4d26" Feb 24 05:17:09.214046 master-0 kubenswrapper[7614]: 
I0224 05:17:09.213939 7614 scope.go:117] "RemoveContainer" containerID="f93fdb0961b7ab6c511e8eb1cee936b815e97917116f05d83d27c325437b676d" Feb 24 05:17:09.216097 master-0 kubenswrapper[7614]: I0224 05:17:09.215872 7614 scope.go:117] "RemoveContainer" containerID="80dce2d75efa45ca36b53637a94f5b4155d200b7759d2e7b129815f6f4324f5a" Feb 24 05:17:09.223822 master-0 kubenswrapper[7614]: I0224 05:17:09.223766 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 05:17:09.573148 master-0 kubenswrapper[7614]: I0224 05:17:09.573100 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e44f770d-f88d-446a-a22f-51b30e89690c/installer/0.log" Feb 24 05:17:09.573298 master-0 kubenswrapper[7614]: I0224 05:17:09.573230 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:17:09.671280 master-0 kubenswrapper[7614]: I0224 05:17:09.671186 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e44f770d-f88d-446a-a22f-51b30e89690c-kube-api-access\") pod \"e44f770d-f88d-446a-a22f-51b30e89690c\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " Feb 24 05:17:09.671280 master-0 kubenswrapper[7614]: I0224 05:17:09.671266 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-var-lock\") pod \"e44f770d-f88d-446a-a22f-51b30e89690c\" (UID: \"e44f770d-f88d-446a-a22f-51b30e89690c\") " Feb 24 05:17:09.671280 master-0 kubenswrapper[7614]: I0224 05:17:09.671336 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-kubelet-dir\") pod \"e44f770d-f88d-446a-a22f-51b30e89690c\" (UID: 
\"e44f770d-f88d-446a-a22f-51b30e89690c\") " Feb 24 05:17:09.671850 master-0 kubenswrapper[7614]: I0224 05:17:09.671407 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-var-lock" (OuterVolumeSpecName: "var-lock") pod "e44f770d-f88d-446a-a22f-51b30e89690c" (UID: "e44f770d-f88d-446a-a22f-51b30e89690c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:17:09.671850 master-0 kubenswrapper[7614]: I0224 05:17:09.671588 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e44f770d-f88d-446a-a22f-51b30e89690c" (UID: "e44f770d-f88d-446a-a22f-51b30e89690c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:17:09.671850 master-0 kubenswrapper[7614]: I0224 05:17:09.671722 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:17:09.674630 master-0 kubenswrapper[7614]: I0224 05:17:09.674566 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e44f770d-f88d-446a-a22f-51b30e89690c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e44f770d-f88d-446a-a22f-51b30e89690c" (UID: "e44f770d-f88d-446a-a22f-51b30e89690c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:17:09.772420 master-0 kubenswrapper[7614]: I0224 05:17:09.772375 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e44f770d-f88d-446a-a22f-51b30e89690c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 05:17:09.772420 master-0 kubenswrapper[7614]: I0224 05:17:09.772410 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e44f770d-f88d-446a-a22f-51b30e89690c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:17:10.295699 master-0 kubenswrapper[7614]: I0224 05:17:10.295593 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rlg4x_c106275b-72b6-4877-95c3-830f93e35375/approver/0.log" Feb 24 05:17:10.300224 master-0 kubenswrapper[7614]: I0224 05:17:10.300154 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/0.log" Feb 24 05:17:10.307015 master-0 kubenswrapper[7614]: I0224 05:17:10.306936 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e44f770d-f88d-446a-a22f-51b30e89690c/installer/0.log" Feb 24 05:17:10.307380 master-0 kubenswrapper[7614]: I0224 05:17:10.307269 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 24 05:17:10.313381 master-0 kubenswrapper[7614]: I0224 05:17:10.313211 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-4fk6k_f77227c8-c52d-4a71-ae1b-792055f6f23d/network-operator/0.log" Feb 24 05:17:11.224030 master-0 kubenswrapper[7614]: E0224 05:17:11.223555 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:17:11.986109 master-0 kubenswrapper[7614]: E0224 05:17:11.985841 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-approver-798b897698-6hgvq.189716e0458b945a openshift-cluster-machine-approver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-machine-approver,Name:machine-approver-798b897698-6hgvq,UID:fe235661-d492-48fc-92e6-d9e1938daeb7,APIVersion:v1,ResourceVersion:8265,FieldPath:spec.containers{machine-approver-controller},},Reason:Created,Message:Created container: machine-approver-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:15:30.888840282 +0000 UTC m=+61.923583438,LastTimestamp:2026-02-24 05:15:30.888840282 +0000 UTC m=+61.923583438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:17:13.771125 master-0 kubenswrapper[7614]: E0224 05:17:13.770977 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 24 05:17:16.361919 master-0 kubenswrapper[7614]: I0224 05:17:16.361806 7614 generic.go:334] "Generic (PLEG): container finished" podID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerID="270089d93d1aad8adc2c6f3a218f7c7455fbc8f4604c672dd2ed10a74721af6c" exitCode=0 Feb 24 05:17:21.224465 master-0 kubenswrapper[7614]: E0224 05:17:21.224363 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:17:22.230678 master-0 kubenswrapper[7614]: I0224 05:17:22.230572 7614 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-dbsnm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" start-of-body= Feb 24 05:17:22.231255 master-0 kubenswrapper[7614]: I0224 05:17:22.230786 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" Feb 24 05:17:22.231255 master-0 kubenswrapper[7614]: I0224 05:17:22.230592 7614 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-dbsnm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" start-of-body= Feb 24 
05:17:22.231255 master-0 kubenswrapper[7614]: I0224 05:17:22.230969 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" Feb 24 05:17:26.973102 master-0 kubenswrapper[7614]: E0224 05:17:26.972911 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 24 05:17:31.225778 master-0 kubenswrapper[7614]: E0224 05:17:31.225698 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:17:31.225778 master-0 kubenswrapper[7614]: E0224 05:17:31.225749 7614 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 05:17:31.379425 master-0 kubenswrapper[7614]: I0224 05:17:31.379235 7614 status_manager.go:851] "Failed to get status for pod" podUID="c9ad9373c007a4fcd25e70622bdc8deb" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-controller-manager-master-0)" Feb 24 05:17:32.230492 master-0 kubenswrapper[7614]: I0224 05:17:32.230372 7614 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-dbsnm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" start-of-body= Feb 24 05:17:32.230492 master-0 kubenswrapper[7614]: I0224 05:17:32.230375 7614 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-dbsnm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" start-of-body= Feb 24 05:17:32.230492 master-0 kubenswrapper[7614]: I0224 05:17:32.230507 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" Feb 24 05:17:32.233270 master-0 kubenswrapper[7614]: I0224 05:17:32.230554 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" Feb 24 05:17:35.002577 master-0 kubenswrapper[7614]: I0224 05:17:35.002486 7614 patch_prober.go:28] interesting pod/etcd-operator-545bf96f4d-tfmbs container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Feb 24 05:17:35.002577 master-0 kubenswrapper[7614]: I0224 05:17:35.002570 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" podUID="7a2c651d-ea1a-41f2-9745-04adc8d88904" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 
10.128.0.8:8443: connect: connection refused" Feb 24 05:17:35.312342 master-0 kubenswrapper[7614]: I0224 05:17:35.312098 7614 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-t75jj container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.26:8081/healthz\": dial tcp 10.128.0.26:8081: connect: connection refused" start-of-body= Feb 24 05:17:35.312342 master-0 kubenswrapper[7614]: I0224 05:17:35.312219 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.26:8081/healthz\": dial tcp 10.128.0.26:8081: connect: connection refused" Feb 24 05:17:35.312948 master-0 kubenswrapper[7614]: I0224 05:17:35.312469 7614 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-t75jj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" start-of-body= Feb 24 05:17:35.312948 master-0 kubenswrapper[7614]: I0224 05:17:35.312613 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" Feb 24 05:17:35.489193 master-0 kubenswrapper[7614]: I0224 05:17:35.489118 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-t75jj_347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/manager/0.log" Feb 24 05:17:35.489193 master-0 kubenswrapper[7614]: I0224 05:17:35.489201 7614 
generic.go:334] "Generic (PLEG): container finished" podID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerID="54f08b019978c50707a9af7625f4b1969ac2f9de3d91bdb89125a98cc8b35f5f" exitCode=1 Feb 24 05:17:35.492372 master-0 kubenswrapper[7614]: I0224 05:17:35.492269 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-zvzxs_d9492fbf-d0f4-4ecf-84ba-b089d69535c1/manager/0.log" Feb 24 05:17:35.493201 master-0 kubenswrapper[7614]: I0224 05:17:35.493149 7614 generic.go:334] "Generic (PLEG): container finished" podID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerID="189c37430c077be09301cf49e843b65676efb76e5d67d2ea4dd214f2f7102ef5" exitCode=1 Feb 24 05:17:37.509195 master-0 kubenswrapper[7614]: I0224 05:17:37.509117 7614 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402" exitCode=1 Feb 24 05:17:38.518617 master-0 kubenswrapper[7614]: I0224 05:17:38.518541 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/0.log" Feb 24 05:17:38.518617 master-0 kubenswrapper[7614]: I0224 05:17:38.518623 7614 generic.go:334] "Generic (PLEG): container finished" podID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" containerID="92100dde9dbd51740744fac31aa4b79ba4dfcf0cd902c28d6ae66b9259196300" exitCode=1 Feb 24 05:17:42.230375 master-0 kubenswrapper[7614]: I0224 05:17:42.230264 7614 patch_prober.go:28] interesting pod/marketplace-operator-6f5488b997-dbsnm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" start-of-body= Feb 24 05:17:42.230375 master-0 kubenswrapper[7614]: I0224 05:17:42.230265 7614 patch_prober.go:28] 
interesting pod/marketplace-operator-6f5488b997-dbsnm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" start-of-body= Feb 24 05:17:42.231367 master-0 kubenswrapper[7614]: I0224 05:17:42.230395 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" Feb 24 05:17:42.231367 master-0 kubenswrapper[7614]: I0224 05:17:42.230436 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.22:8080/healthz\": dial tcp 10.128.0.22:8080: connect: connection refused" Feb 24 05:17:43.227253 master-0 kubenswrapper[7614]: E0224 05:17:43.227156 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:17:43.227929 master-0 kubenswrapper[7614]: E0224 05:17:43.227447 7614 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s" Feb 24 05:17:43.227929 master-0 kubenswrapper[7614]: I0224 05:17:43.227556 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:17:43.227929 master-0 kubenswrapper[7614]: I0224 05:17:43.227619 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 
05:17:43.229892 master-0 kubenswrapper[7614]: I0224 05:17:43.229809 7614 scope.go:117] "RemoveContainer" containerID="270089d93d1aad8adc2c6f3a218f7c7455fbc8f4604c672dd2ed10a74721af6c" Feb 24 05:17:43.230026 master-0 kubenswrapper[7614]: I0224 05:17:43.229922 7614 scope.go:117] "RemoveContainer" containerID="1fe643ed33a9f72192d56893c5e0183a5530b52d1fd5cb43d00c8adaabb5837c" Feb 24 05:17:43.236287 master-0 kubenswrapper[7614]: I0224 05:17:43.236244 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 05:17:43.374532 master-0 kubenswrapper[7614]: E0224 05:17:43.374453 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:17:44.857681 master-0 kubenswrapper[7614]: I0224 05:17:44.857584 7614 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-zvzxs container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.28:8081/readyz\": dial tcp 10.128.0.28:8081: connect: connection refused" start-of-body= Feb 24 05:17:44.858644 master-0 kubenswrapper[7614]: I0224 05:17:44.857685 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" podUID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.28:8081/readyz\": dial tcp 10.128.0.28:8081: connect: connection refused" Feb 24 05:17:45.311120 master-0 kubenswrapper[7614]: I0224 05:17:45.311037 7614 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-t75jj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 
10.128.0.26:8081: connect: connection refused" start-of-body= Feb 24 05:17:45.311426 master-0 kubenswrapper[7614]: I0224 05:17:45.311147 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" Feb 24 05:17:45.572254 master-0 kubenswrapper[7614]: I0224 05:17:45.572043 7614 generic.go:334] "Generic (PLEG): container finished" podID="d86d5bbe-3768-4695-810b-245a56e4fd1d" containerID="104b76f7ac0ef4084c50822d35c6690afc0cd965133c5d489594ae901dd1b9f2" exitCode=0 Feb 24 05:17:45.989577 master-0 kubenswrapper[7614]: E0224 05:17:45.989249 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{machine-approver-798b897698-6hgvq.189716e0464e6e18 openshift-cluster-machine-approver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-machine-approver,Name:machine-approver-798b897698-6hgvq,UID:fe235661-d492-48fc-92e6-d9e1938daeb7,APIVersion:v1,ResourceVersion:8265,FieldPath:spec.containers{machine-approver-controller},},Reason:Started,Message:Started container machine-approver-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:15:30.901610008 +0000 UTC m=+61.936353164,LastTimestamp:2026-02-24 05:15:30.901610008 +0000 UTC m=+61.936353164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:17:51.364611 master-0 kubenswrapper[7614]: E0224 05:17:51.364348 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:17:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:17:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:17:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:17:41Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a
4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:235b846666adaa2e4b4d6d0f7fd71d57bf3be253466e1d9fffafd103fa2696ac\\\"],\\\"sizeBytes\\\":470575802},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce89154fa3fe1e87c660e644b58cf125fede575869fd5841600082c0d1f858a3\\\"],\\\"sizeBytes\\\":468159025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ba8aec9f09d75121b95d2e6f1097415302c0ae7121fa7076fd38d7adb9a5afa\\\"],\\\"sizeBytes\\\":467133839},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb2014728aa54e620f65424402b14c5247016734a9a982c393dc011acb1a1f52\\\"],\\\"sizeBytes\\\":464984427},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39d04e6e7ced98e7e189aff1bf392a4d4526e011fc6adead5c6b27dbd08776a9\\\"],\\\"sizeBytes\\\":463600445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f42321072d0ab781f41e8f595ed6f5efabe791e472c7d0784e61b3c214194656\\\"],\\\"sizeBytes\\\":458025547},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:24097d3bc90ed1fc543f5d96736c6091eb57b9e578d7186f430147ee28269cbf\\\"],\\\"sizeBytes\\\":456470711},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e53cc6c4d6263c99978c787e90575dd4818eac732589145ca7331186ad4f16de\\\"],\\\
"sizeBytes\\\":448723134},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc46bdc145c2a9e4a89a5fe574cd228b7355eb99754255bf9a0c8bf2cc1de1f2\\\"],\\\"sizeBytes\\\":447940744},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eef7d0364bb9259fdc66e57df6df3a59ce7bf957a77d0ca25d4fedb5f122015\\\"],\\\"sizeBytes\\\":443170136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b1d840665bf310fa455ddaff9b262dd0649440ca9ecf34d49b340ce669885568\\\"],\\\"sizeBytes\\\":411485245},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16ea15164e7d71550d4c0e2c90d17f96edda4ab77123947b2e188ffb23951fa0\\\"],\\\"sizeBytes\\\":407241636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229\\\"],\\\"sizeBytes\\\":396420881}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:17:54.856992 master-0 kubenswrapper[7614]: I0224 05:17:54.856863 7614 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-zvzxs container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.28:8081/readyz\": dial tcp 10.128.0.28:8081: connect: connection refused" start-of-body= Feb 24 05:17:54.856992 master-0 kubenswrapper[7614]: I0224 05:17:54.856938 7614 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-zvzxs container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.28:8081/healthz\": dial tcp 10.128.0.28:8081: connect: connection refused" start-of-body= Feb 24 05:17:54.856992 master-0 kubenswrapper[7614]: I0224 05:17:54.856987 7614 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" podUID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.28:8081/readyz\": dial tcp 10.128.0.28:8081: connect: connection refused" Feb 24 05:17:54.858384 master-0 kubenswrapper[7614]: I0224 05:17:54.857058 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" podUID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.28:8081/healthz\": dial tcp 10.128.0.28:8081: connect: connection refused" Feb 24 05:17:55.311541 master-0 kubenswrapper[7614]: I0224 05:17:55.311410 7614 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-t75jj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" start-of-body= Feb 24 05:17:55.311541 master-0 kubenswrapper[7614]: I0224 05:17:55.311410 7614 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-t75jj container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.26:8081/healthz\": dial tcp 10.128.0.26:8081: connect: connection refused" start-of-body= Feb 24 05:17:55.312004 master-0 kubenswrapper[7614]: I0224 05:17:55.311534 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" Feb 24 05:17:55.312004 master-0 kubenswrapper[7614]: I0224 05:17:55.311632 7614 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.26:8081/healthz\": dial tcp 10.128.0.26:8081: connect: connection refused" Feb 24 05:17:56.236365 master-0 kubenswrapper[7614]: E0224 05:17:56.236210 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 24 05:18:00.376100 master-0 kubenswrapper[7614]: E0224 05:18:00.375917 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:18:01.365298 master-0 kubenswrapper[7614]: E0224 05:18:01.365191 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:18:04.857582 master-0 kubenswrapper[7614]: I0224 05:18:04.857275 7614 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-zvzxs container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.28:8081/readyz\": dial tcp 10.128.0.28:8081: connect: connection refused" start-of-body= Feb 24 05:18:04.857582 master-0 kubenswrapper[7614]: I0224 05:18:04.857485 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" podUID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.28:8081/readyz\": dial tcp 10.128.0.28:8081: 
connect: connection refused" Feb 24 05:18:05.311161 master-0 kubenswrapper[7614]: I0224 05:18:05.311073 7614 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-t75jj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" start-of-body= Feb 24 05:18:05.311579 master-0 kubenswrapper[7614]: I0224 05:18:05.311185 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" Feb 24 05:18:07.765105 master-0 kubenswrapper[7614]: I0224 05:18:07.764996 7614 generic.go:334] "Generic (PLEG): container finished" podID="88b915ff-fd94-4998-aa09-70f95c0f1b8a" containerID="319aa71d8e4b9690e64904978260695fcae1163baf1014ab285b451aeabac3a9" exitCode=0 Feb 24 05:18:09.779933 master-0 kubenswrapper[7614]: I0224 05:18:09.779727 7614 generic.go:334] "Generic (PLEG): container finished" podID="e75c6622-29b4-4da8-8409-be898aab9f49" containerID="caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b" exitCode=0 Feb 24 05:18:11.216729 master-0 kubenswrapper[7614]: I0224 05:18:11.216622 7614 patch_prober.go:28] interesting pod/controller-manager-557cb6655b-75nhl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.42:8443/healthz\": dial tcp 10.128.0.42:8443: connect: connection refused" start-of-body= Feb 24 05:18:11.217502 master-0 kubenswrapper[7614]: I0224 05:18:11.216740 7614 patch_prober.go:28] interesting pod/controller-manager-557cb6655b-75nhl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.128.0.42:8443/healthz\": dial tcp 10.128.0.42:8443: connect: connection refused" start-of-body= Feb 24 05:18:11.217502 master-0 kubenswrapper[7614]: I0224 05:18:11.216742 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.42:8443/healthz\": dial tcp 10.128.0.42:8443: connect: connection refused" Feb 24 05:18:11.217502 master-0 kubenswrapper[7614]: I0224 05:18:11.216880 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.42:8443/healthz\": dial tcp 10.128.0.42:8443: connect: connection refused" Feb 24 05:18:11.365627 master-0 kubenswrapper[7614]: E0224 05:18:11.365518 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:18:14.857470 master-0 kubenswrapper[7614]: I0224 05:18:14.857276 7614 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-zvzxs container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.28:8081/healthz\": dial tcp 10.128.0.28:8081: connect: connection refused" start-of-body= Feb 24 05:18:14.858560 master-0 kubenswrapper[7614]: I0224 05:18:14.857458 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" podUID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.28:8081/healthz\": dial tcp 
10.128.0.28:8081: connect: connection refused" Feb 24 05:18:14.858560 master-0 kubenswrapper[7614]: I0224 05:18:14.857293 7614 patch_prober.go:28] interesting pod/catalogd-controller-manager-84b8d9d697-zvzxs container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.28:8081/readyz\": dial tcp 10.128.0.28:8081: connect: connection refused" start-of-body= Feb 24 05:18:14.858560 master-0 kubenswrapper[7614]: I0224 05:18:14.857698 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" podUID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.28:8081/readyz\": dial tcp 10.128.0.28:8081: connect: connection refused" Feb 24 05:18:15.310584 master-0 kubenswrapper[7614]: I0224 05:18:15.310476 7614 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-t75jj container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.26:8081/healthz\": dial tcp 10.128.0.26:8081: connect: connection refused" start-of-body= Feb 24 05:18:15.310872 master-0 kubenswrapper[7614]: I0224 05:18:15.310598 7614 patch_prober.go:28] interesting pod/operator-controller-controller-manager-9cc7d7bb-t75jj container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" start-of-body= Feb 24 05:18:15.310872 master-0 kubenswrapper[7614]: I0224 05:18:15.310620 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.26:8081/healthz\": dial tcp 10.128.0.26:8081: connect: connection refused" Feb 24 
05:18:15.310872 master-0 kubenswrapper[7614]: I0224 05:18:15.310699 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.26:8081/readyz\": dial tcp 10.128.0.26:8081: connect: connection refused" Feb 24 05:18:15.821410 master-0 kubenswrapper[7614]: I0224 05:18:15.821299 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-zzvtt_32fd577d-8966-4ab1-95cf-357291084156/control-plane-machine-set-operator/0.log" Feb 24 05:18:15.821690 master-0 kubenswrapper[7614]: I0224 05:18:15.821430 7614 generic.go:334] "Generic (PLEG): container finished" podID="32fd577d-8966-4ab1-95cf-357291084156" containerID="cd2e094a618f188c882e23ef5f50ea70a38793ab6e08f1bfec1cd4a082e97144" exitCode=1 Feb 24 05:18:17.240134 master-0 kubenswrapper[7614]: E0224 05:18:17.240049 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 24 05:18:17.241489 master-0 kubenswrapper[7614]: E0224 05:18:17.240364 7614 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s" Feb 24 05:18:17.241489 master-0 kubenswrapper[7614]: I0224 05:18:17.240400 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:18:17.241489 master-0 kubenswrapper[7614]: I0224 05:18:17.240465 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:18:17.241489 master-0 kubenswrapper[7614]: I0224 05:18:17.240498 7614 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:18:17.241750 master-0 kubenswrapper[7614]: I0224 05:18:17.241524 7614 scope.go:117] "RemoveContainer" containerID="54f08b019978c50707a9af7625f4b1969ac2f9de3d91bdb89125a98cc8b35f5f" Feb 24 05:18:17.242138 master-0 kubenswrapper[7614]: I0224 05:18:17.241986 7614 scope.go:117] "RemoveContainer" containerID="189c37430c077be09301cf49e843b65676efb76e5d67d2ea4dd214f2f7102ef5" Feb 24 05:18:17.248375 master-0 kubenswrapper[7614]: I0224 05:18:17.248272 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 05:18:17.377713 master-0 kubenswrapper[7614]: E0224 05:18:17.377568 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:18:17.845436 master-0 kubenswrapper[7614]: I0224 05:18:17.845189 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-t75jj_347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/manager/0.log" Feb 24 05:18:17.849974 master-0 kubenswrapper[7614]: I0224 05:18:17.849907 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-zvzxs_d9492fbf-d0f4-4ecf-84ba-b089d69535c1/manager/0.log" Feb 24 05:18:19.994153 master-0 kubenswrapper[7614]: E0224 05:18:19.993943 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189716e31a39b050 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:15:43.046955088 +0000 UTC m=+74.081698284,LastTimestamp:2026-02-24 05:15:43.046955088 +0000 UTC m=+74.081698284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:18:21.216168 master-0 kubenswrapper[7614]: I0224 05:18:21.216084 7614 patch_prober.go:28] interesting pod/controller-manager-557cb6655b-75nhl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.42:8443/healthz\": dial tcp 10.128.0.42:8443: connect: connection refused" start-of-body= Feb 24 05:18:21.217004 master-0 kubenswrapper[7614]: I0224 05:18:21.216172 7614 patch_prober.go:28] interesting pod/controller-manager-557cb6655b-75nhl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.42:8443/healthz\": dial tcp 10.128.0.42:8443: connect: connection refused" start-of-body= Feb 24 05:18:21.217004 master-0 kubenswrapper[7614]: I0224 05:18:21.216253 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.42:8443/healthz\": dial tcp 10.128.0.42:8443: connect: connection refused" Feb 24 05:18:21.217004 master-0 kubenswrapper[7614]: I0224 05:18:21.216171 7614 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.42:8443/healthz\": dial tcp 10.128.0.42:8443: connect: connection refused" Feb 24 05:18:26.930130 master-0 kubenswrapper[7614]: E0224 05:18:26.930017 7614 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.689s" Feb 24 05:18:26.930130 master-0 kubenswrapper[7614]: I0224 05:18:26.930104 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 24 05:18:26.930130 master-0 kubenswrapper[7614]: I0224 05:18:26.930132 7614 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="00c820c1-7074-45d8-bb0f-4f133e231662" Feb 24 05:18:26.948886 master-0 kubenswrapper[7614]: I0224 05:18:26.948794 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 24 05:18:26.953251 master-0 kubenswrapper[7614]: I0224 05:18:26.953192 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 24 05:18:26.953251 master-0 kubenswrapper[7614]: I0224 05:18:26.953240 7614 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="00c820c1-7074-45d8-bb0f-4f133e231662" Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953270 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e44f770d-f88d-446a-a22f-51b30e89690c","Type":"ContainerDied","Data":"1f43a4854636c4d4d499b77fab14041aa2c65280b5d333f68ca719e5325adfaf"} Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953303 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerDied","Data":"80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b"} Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953357 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" event={"ID":"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d","Type":"ContainerDied","Data":"f93fdb0961b7ab6c511e8eb1cee936b815e97917116f05d83d27c325437b676d"} Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953381 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" event={"ID":"22813c83-2f60-44ad-9624-ad367cec08f7","Type":"ContainerDied","Data":"3d7e3ee020313467e6fefd173d6752fc4e4ffcc2fae974414212fcbe51114f7d"} Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953406 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" event={"ID":"f77227c8-c52d-4a71-ae1b-792055f6f23d","Type":"ContainerDied","Data":"22b7d6a6838a4874825b0fb486995e1ecae2b2ab9edf5d7d1caac95d9b544b8e"} Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953440 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" event={"ID":"7a2c651d-ea1a-41f2-9745-04adc8d88904","Type":"ContainerDied","Data":"1fe643ed33a9f72192d56893c5e0183a5530b52d1fd5cb43d00c8adaabb5837c"} Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953467 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953489 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" event={"ID":"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e","Type":"ContainerDied","Data":"80dce2d75efa45ca36b53637a94f5b4155d200b7759d2e7b129815f6f4324f5a"} Feb 24 05:18:26.953500 master-0 kubenswrapper[7614]: I0224 05:18:26.953509 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rlg4x" event={"ID":"c106275b-72b6-4877-95c3-830f93e35375","Type":"ContainerDied","Data":"3c48cf95cb20519b43165b534538afb3afad0ec1beb464f9f497eefdb2dc3c0f"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953539 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953564 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953583 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerDied","Data":"dd6d3f4e8c90f9e72cf283fa2ee57699a971df08e7b5a82fbc21deb33aca4d26"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953617 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rlg4x" event={"ID":"c106275b-72b6-4877-95c3-830f93e35375","Type":"ContainerStarted","Data":"8d89f8110c46f839405874fb4dba9bf410e3a518ca5d273b143187f669975cd0"} Feb 24 05:18:26.954078 
master-0 kubenswrapper[7614]: I0224 05:18:26.953637 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"f9e75ea6f0c81eec46e337376adf731ab535fc067c7d1c6d227f14a9e7433ffe"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953692 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" event={"ID":"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d","Type":"ContainerStarted","Data":"3b73827e2bb1f8b20c02df6acec604b6c43e878ca9e2bd5192c12a2a62cbd894"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953712 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"e44f770d-f88d-446a-a22f-51b30e89690c","Type":"ContainerDied","Data":"2de4f0bf021dd4e6a7368be09b5e12113f2c9fbed68c5c931e616a804a48f74b"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953735 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de4f0bf021dd4e6a7368be09b5e12113f2c9fbed68c5c931e616a804a48f74b" Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953755 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" event={"ID":"22813c83-2f60-44ad-9624-ad367cec08f7","Type":"ContainerStarted","Data":"03dd9053750096b7f82252736f4fac427fd0dcd291c847a9672ee97680c7a2e7"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953773 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" event={"ID":"f77227c8-c52d-4a71-ae1b-792055f6f23d","Type":"ContainerStarted","Data":"6e3c93a1a355eeeb3f5cb2283a174709bfd59dc7e2e2f1d724c2278f1e630da9"} Feb 24 
05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953799 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" event={"ID":"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e","Type":"ContainerStarted","Data":"49b21c85c511839ea61bf1eb992b507dfd3ec3bd10df341c02909db55b0a753b"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953820 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" event={"ID":"dd29bef3-d27e-48b3-9aa0-d915e949b3d5","Type":"ContainerDied","Data":"270089d93d1aad8adc2c6f3a218f7c7455fbc8f4604c672dd2ed10a74721af6c"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953842 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" event={"ID":"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a","Type":"ContainerDied","Data":"54f08b019978c50707a9af7625f4b1969ac2f9de3d91bdb89125a98cc8b35f5f"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953865 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" event={"ID":"d9492fbf-d0f4-4ecf-84ba-b089d69535c1","Type":"ContainerDied","Data":"189c37430c077be09301cf49e843b65676efb76e5d67d2ea4dd214f2f7102ef5"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953888 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerDied","Data":"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953910 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" 
event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerDied","Data":"92100dde9dbd51740744fac31aa4b79ba4dfcf0cd902c28d6ae66b9259196300"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953934 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" event={"ID":"dd29bef3-d27e-48b3-9aa0-d915e949b3d5","Type":"ContainerStarted","Data":"c2d1c04894486e075c5bb15ad6bb88a45eb446ca42f9495fa6638b84c3d79262"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953953 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" event={"ID":"7a2c651d-ea1a-41f2-9745-04adc8d88904","Type":"ContainerStarted","Data":"5b9fbeb4c761c7177b525ed4d8c68cf8e069fca30c46bcfac1010c8ec65d4d07"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953972 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" event={"ID":"d86d5bbe-3768-4695-810b-245a56e4fd1d","Type":"ContainerDied","Data":"104b76f7ac0ef4084c50822d35c6690afc0cd965133c5d489594ae901dd1b9f2"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.953992 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.954011 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.954029 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.954047 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.954069 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"18a83278819db2092fa26d8274eb3f00","Type":"ContainerStarted","Data":"7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.954090 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" event={"ID":"88b915ff-fd94-4998-aa09-70f95c0f1b8a","Type":"ContainerDied","Data":"319aa71d8e4b9690e64904978260695fcae1163baf1014ab285b451aeabac3a9"} Feb 24 05:18:26.954078 master-0 kubenswrapper[7614]: I0224 05:18:26.954113 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" event={"ID":"e75c6622-29b4-4da8-8409-be898aab9f49","Type":"ContainerDied","Data":"caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b"} Feb 24 05:18:26.955698 master-0 kubenswrapper[7614]: I0224 05:18:26.954134 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" event={"ID":"32fd577d-8966-4ab1-95cf-357291084156","Type":"ContainerDied","Data":"cd2e094a618f188c882e23ef5f50ea70a38793ab6e08f1bfec1cd4a082e97144"} Feb 24 05:18:26.955698 master-0 kubenswrapper[7614]: I0224 05:18:26.954158 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" event={"ID":"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a","Type":"ContainerStarted","Data":"27d3c979d980c52be573082c4d98e2b43efa2f5962b15df7eb3f072aaaaf8885"} Feb 24 05:18:26.955698 master-0 kubenswrapper[7614]: I0224 05:18:26.954177 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" event={"ID":"d9492fbf-d0f4-4ecf-84ba-b089d69535c1","Type":"ContainerStarted","Data":"54cc6a7eea7de4886fcefce8b98bd35f27338eed7eb5d39d1aa4df2fed85d25a"} Feb 24 05:18:26.955698 master-0 kubenswrapper[7614]: I0224 05:18:26.954908 7614 scope.go:117] "RemoveContainer" containerID="cd2e094a618f188c882e23ef5f50ea70a38793ab6e08f1bfec1cd4a082e97144" Feb 24 05:18:26.955698 master-0 kubenswrapper[7614]: I0224 05:18:26.955202 7614 scope.go:117] "RemoveContainer" containerID="7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429" Feb 24 05:18:26.957417 master-0 kubenswrapper[7614]: I0224 05:18:26.957362 7614 scope.go:117] "RemoveContainer" containerID="14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402" Feb 24 05:18:26.959110 master-0 kubenswrapper[7614]: I0224 05:18:26.959062 7614 scope.go:117] "RemoveContainer" containerID="caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b" Feb 24 05:18:26.960751 master-0 kubenswrapper[7614]: I0224 05:18:26.960644 7614 scope.go:117] "RemoveContainer" containerID="319aa71d8e4b9690e64904978260695fcae1163baf1014ab285b451aeabac3a9" Feb 24 05:18:26.962680 master-0 kubenswrapper[7614]: I0224 05:18:26.962590 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 24 05:18:26.963154 master-0 kubenswrapper[7614]: I0224 05:18:26.963111 7614 scope.go:117] "RemoveContainer" containerID="92100dde9dbd51740744fac31aa4b79ba4dfcf0cd902c28d6ae66b9259196300" Feb 24 05:18:26.964709 master-0 kubenswrapper[7614]: I0224 
05:18:26.964649 7614 scope.go:117] "RemoveContainer" containerID="104b76f7ac0ef4084c50822d35c6690afc0cd965133c5d489594ae901dd1b9f2" Feb 24 05:18:26.983677 master-0 kubenswrapper[7614]: I0224 05:18:26.981740 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" podStartSLOduration=178.10871847 podStartE2EDuration="2m59.981715011s" podCreationTimestamp="2026-02-24 05:15:27 +0000 UTC" firstStartedPulling="2026-02-24 05:15:28.77195612 +0000 UTC m=+59.806699306" lastFinishedPulling="2026-02-24 05:15:30.644952651 +0000 UTC m=+61.679695847" observedRunningTime="2026-02-24 05:18:26.980177651 +0000 UTC m=+238.014920917" watchObservedRunningTime="2026-02-24 05:18:26.981715011 +0000 UTC m=+238.016458167" Feb 24 05:18:27.004485 master-0 kubenswrapper[7614]: I0224 05:18:27.003159 7614 scope.go:117] "RemoveContainer" containerID="487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8" Feb 24 05:18:27.098078 master-0 kubenswrapper[7614]: I0224 05:18:27.098014 7614 scope.go:117] "RemoveContainer" containerID="7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429" Feb 24 05:18:27.099296 master-0 kubenswrapper[7614]: E0224 05:18:27.099046 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429\": container with ID starting with 7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429 not found: ID does not exist" containerID="7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429" Feb 24 05:18:27.099296 master-0 kubenswrapper[7614]: I0224 05:18:27.099111 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429"} err="failed to get container status 
\"7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429\": rpc error: code = NotFound desc = could not find container \"7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429\": container with ID starting with 7e457713ca1d1ba317f2afd7258df17db980e4c10d06e5baec4b31193663e429 not found: ID does not exist" Feb 24 05:18:27.099296 master-0 kubenswrapper[7614]: I0224 05:18:27.099148 7614 scope.go:117] "RemoveContainer" containerID="487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8" Feb 24 05:18:27.099640 master-0 kubenswrapper[7614]: E0224 05:18:27.099589 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8\": container with ID starting with 487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8 not found: ID does not exist" containerID="487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8" Feb 24 05:18:27.099692 master-0 kubenswrapper[7614]: I0224 05:18:27.099657 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8"} err="failed to get container status \"487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8\": rpc error: code = NotFound desc = could not find container \"487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8\": container with ID starting with 487b980bd30f01a4370e0f26f33ccb604a296e479126f79a526ff15ecc98aaf8 not found: ID does not exist" Feb 24 05:18:27.185446 master-0 kubenswrapper[7614]: I0224 05:18:27.184971 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d886fdf-fd74-45de-b7c0-2e8e75eb994e" path="/var/lib/kubelet/pods/1d886fdf-fd74-45de-b7c0-2e8e75eb994e/volumes" Feb 24 05:18:27.926584 master-0 kubenswrapper[7614]: I0224 05:18:27.926490 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-zzvtt_32fd577d-8966-4ab1-95cf-357291084156/control-plane-machine-set-operator/0.log" Feb 24 05:18:27.926973 master-0 kubenswrapper[7614]: I0224 05:18:27.926734 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" event={"ID":"32fd577d-8966-4ab1-95cf-357291084156","Type":"ContainerStarted","Data":"b931c4e73120acfd5edaa21c3bd09b78ab41757182041f2c3263ed0153cf894b"} Feb 24 05:18:27.930855 master-0 kubenswrapper[7614]: I0224 05:18:27.930783 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" event={"ID":"e75c6622-29b4-4da8-8409-be898aab9f49","Type":"ContainerStarted","Data":"eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0"} Feb 24 05:18:27.931889 master-0 kubenswrapper[7614]: I0224 05:18:27.931191 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:18:27.934041 master-0 kubenswrapper[7614]: I0224 05:18:27.933945 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" event={"ID":"d86d5bbe-3768-4695-810b-245a56e4fd1d","Type":"ContainerStarted","Data":"2f151e3442498eed531dc228511816d55db9ae5db685cbb2166ce65b5b71997d"} Feb 24 05:18:27.936565 master-0 kubenswrapper[7614]: I0224 05:18:27.936495 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"c9ad9373c007a4fcd25e70622bdc8deb","Type":"ContainerStarted","Data":"5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a"} Feb 24 05:18:27.939140 master-0 kubenswrapper[7614]: I0224 05:18:27.939079 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/0.log" Feb 24 05:18:27.939292 master-0 kubenswrapper[7614]: I0224 05:18:27.939170 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerStarted","Data":"9223fb2da930fb3c50e82163a41bfe2c42eac1ee2e2d4f682d787074cbff45d5"} Feb 24 05:18:27.942853 master-0 kubenswrapper[7614]: I0224 05:18:27.942789 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" event={"ID":"88b915ff-fd94-4998-aa09-70f95c0f1b8a","Type":"ContainerStarted","Data":"96a4e787b3e1f9eeaea51f2ad42e9605d98e2f89f59460135daea10bdd951213"} Feb 24 05:18:27.946441 master-0 kubenswrapper[7614]: I0224 05:18:27.946374 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" Feb 24 05:18:28.012804 master-0 kubenswrapper[7614]: I0224 05:18:28.012705 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 24 05:18:32.230576 master-0 kubenswrapper[7614]: I0224 05:18:32.230466 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:18:32.234066 master-0 kubenswrapper[7614]: I0224 05:18:32.234018 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:18:33.012687 master-0 kubenswrapper[7614]: I0224 05:18:33.012551 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 24 05:18:33.049169 master-0 kubenswrapper[7614]: I0224 05:18:33.049085 7614 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 24 05:18:34.378585 master-0 kubenswrapper[7614]: E0224 05:18:34.378410 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:18:34.856532 master-0 kubenswrapper[7614]: I0224 05:18:34.856438 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:18:34.859689 master-0 kubenswrapper[7614]: I0224 05:18:34.859623 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:18:35.200111 master-0 kubenswrapper[7614]: I0224 05:18:35.199906 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:18:35.207420 master-0 kubenswrapper[7614]: I0224 05:18:35.207289 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:18:35.311396 master-0 kubenswrapper[7614]: I0224 05:18:35.311287 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 24 05:18:35.311396 master-0 kubenswrapper[7614]: I0224 05:18:35.311378 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:18:35.313974 master-0 kubenswrapper[7614]: I0224 05:18:35.313894 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:18:35.401358 master-0 kubenswrapper[7614]: I0224 
05:18:35.401157 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.401109902 podStartE2EDuration="401.109902ms" podCreationTimestamp="2026-02-24 05:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:18:35.398689258 +0000 UTC m=+246.433432424" watchObservedRunningTime="2026-02-24 05:18:35.401109902 +0000 UTC m=+246.435853098"
Feb 24 05:18:35.579305 master-0 kubenswrapper[7614]: I0224 05:18:35.579102 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:18:36.018720 master-0 kubenswrapper[7614]: E0224 05:18:36.018636 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Feb 24 05:18:36.030683 master-0 kubenswrapper[7614]: I0224 05:18:36.030595 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 24 05:18:37.011847 master-0 kubenswrapper[7614]: I0224 05:18:37.011756 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 24 05:18:40.028390 master-0 kubenswrapper[7614]: I0224 05:18:40.028297 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-4fk6k_f77227c8-c52d-4a71-ae1b-792055f6f23d/network-operator/1.log"
Feb 24 05:18:40.029701 master-0 kubenswrapper[7614]: I0224 05:18:40.029623 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-4fk6k_f77227c8-c52d-4a71-ae1b-792055f6f23d/network-operator/0.log"
Feb 24 05:18:40.029835 master-0 kubenswrapper[7614]: I0224 05:18:40.029745 7614 generic.go:334] "Generic (PLEG): container finished" podID="f77227c8-c52d-4a71-ae1b-792055f6f23d" containerID="6e3c93a1a355eeeb3f5cb2283a174709bfd59dc7e2e2f1d724c2278f1e630da9" exitCode=255
Feb 24 05:18:40.029835 master-0 kubenswrapper[7614]: I0224 05:18:40.029808 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" event={"ID":"f77227c8-c52d-4a71-ae1b-792055f6f23d","Type":"ContainerDied","Data":"6e3c93a1a355eeeb3f5cb2283a174709bfd59dc7e2e2f1d724c2278f1e630da9"}
Feb 24 05:18:40.029972 master-0 kubenswrapper[7614]: I0224 05:18:40.029881 7614 scope.go:117] "RemoveContainer" containerID="22b7d6a6838a4874825b0fb486995e1ecae2b2ab9edf5d7d1caac95d9b544b8e"
Feb 24 05:18:40.030901 master-0 kubenswrapper[7614]: I0224 05:18:40.030832 7614 scope.go:117] "RemoveContainer" containerID="6e3c93a1a355eeeb3f5cb2283a174709bfd59dc7e2e2f1d724c2278f1e630da9"
Feb 24 05:18:40.031295 master-0 kubenswrapper[7614]: E0224 05:18:40.031224 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=network-operator pod=network-operator-7d7db75979-4fk6k_openshift-network-operator(f77227c8-c52d-4a71-ae1b-792055f6f23d)\"" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" podUID="f77227c8-c52d-4a71-ae1b-792055f6f23d"
Feb 24 05:18:41.041012 master-0 kubenswrapper[7614]: I0224 05:18:41.040920 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-ncrqj_17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/kube-apiserver-operator/1.log"
Feb 24 05:18:41.042089 master-0 kubenswrapper[7614]: I0224 05:18:41.042022 7614 generic.go:334] "Generic (PLEG): container finished" podID="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" containerID="3b73827e2bb1f8b20c02df6acec604b6c43e878ca9e2bd5192c12a2a62cbd894" exitCode=255
Feb 24 05:18:41.042232 master-0 kubenswrapper[7614]: I0224 05:18:41.042146 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" event={"ID":"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d","Type":"ContainerDied","Data":"3b73827e2bb1f8b20c02df6acec604b6c43e878ca9e2bd5192c12a2a62cbd894"}
Feb 24 05:18:41.042338 master-0 kubenswrapper[7614]: I0224 05:18:41.042281 7614 scope.go:117] "RemoveContainer" containerID="f93fdb0961b7ab6c511e8eb1cee936b815e97917116f05d83d27c325437b676d"
Feb 24 05:18:41.043027 master-0 kubenswrapper[7614]: I0224 05:18:41.042975 7614 scope.go:117] "RemoveContainer" containerID="3b73827e2bb1f8b20c02df6acec604b6c43e878ca9e2bd5192c12a2a62cbd894"
Feb 24 05:18:41.043286 master-0 kubenswrapper[7614]: E0224 05:18:41.043239 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-5d87bf58c-ncrqj_openshift-kube-apiserver-operator(17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" podUID="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d"
Feb 24 05:18:41.048146 master-0 kubenswrapper[7614]: I0224 05:18:41.048086 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-8zrj9_22813c83-2f60-44ad-9624-ad367cec08f7/kube-controller-manager-operator/1.log"
Feb 24 05:18:41.049157 master-0 kubenswrapper[7614]: I0224 05:18:41.048733 7614 generic.go:334] "Generic (PLEG): container finished" podID="22813c83-2f60-44ad-9624-ad367cec08f7" containerID="03dd9053750096b7f82252736f4fac427fd0dcd291c847a9672ee97680c7a2e7" exitCode=255
Feb 24 05:18:41.049157 master-0 kubenswrapper[7614]: I0224 05:18:41.048836 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" event={"ID":"22813c83-2f60-44ad-9624-ad367cec08f7","Type":"ContainerDied","Data":"03dd9053750096b7f82252736f4fac427fd0dcd291c847a9672ee97680c7a2e7"}
Feb 24 05:18:41.049360 master-0 kubenswrapper[7614]: I0224 05:18:41.049238 7614 scope.go:117] "RemoveContainer" containerID="03dd9053750096b7f82252736f4fac427fd0dcd291c847a9672ee97680c7a2e7"
Feb 24 05:18:41.049523 master-0 kubenswrapper[7614]: E0224 05:18:41.049464 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-7bcfbc574b-8zrj9_openshift-kube-controller-manager-operator(22813c83-2f60-44ad-9624-ad367cec08f7)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" podUID="22813c83-2f60-44ad-9624-ad367cec08f7"
Feb 24 05:18:41.050719 master-0 kubenswrapper[7614]: I0224 05:18:41.050680 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-4fk6k_f77227c8-c52d-4a71-ae1b-792055f6f23d/network-operator/1.log"
Feb 24 05:18:41.053427 master-0 kubenswrapper[7614]: I0224 05:18:41.053374 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-r6p58_c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/kube-storage-version-migrator-operator/1.log"
Feb 24 05:18:41.056077 master-0 kubenswrapper[7614]: I0224 05:18:41.056016 7614 generic.go:334] "Generic (PLEG): container finished" podID="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" containerID="49b21c85c511839ea61bf1eb992b507dfd3ec3bd10df341c02909db55b0a753b" exitCode=255
Feb 24 05:18:41.056202 master-0 kubenswrapper[7614]: I0224 05:18:41.056082 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" event={"ID":"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e","Type":"ContainerDied","Data":"49b21c85c511839ea61bf1eb992b507dfd3ec3bd10df341c02909db55b0a753b"}
Feb 24 05:18:41.057239 master-0 kubenswrapper[7614]: I0224 05:18:41.057185 7614 scope.go:117] "RemoveContainer" containerID="49b21c85c511839ea61bf1eb992b507dfd3ec3bd10df341c02909db55b0a753b"
Feb 24 05:18:41.057641 master-0 kubenswrapper[7614]: E0224 05:18:41.057555 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-fc889cfd5-r6p58_openshift-kube-storage-version-migrator-operator(c3fed34f-b275-42c6-af6c-8de3e6fe0f9e)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" podUID="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e"
Feb 24 05:18:41.093987 master-0 kubenswrapper[7614]: I0224 05:18:41.093902 7614 scope.go:117] "RemoveContainer" containerID="3d7e3ee020313467e6fefd173d6752fc4e4ffcc2fae974414212fcbe51114f7d"
Feb 24 05:18:41.133216 master-0 kubenswrapper[7614]: I0224 05:18:41.133141 7614 scope.go:117] "RemoveContainer" containerID="80dce2d75efa45ca36b53637a94f5b4155d200b7759d2e7b129815f6f4324f5a"
Feb 24 05:18:42.063326 master-0 kubenswrapper[7614]: I0224 05:18:42.063245 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-8zrj9_22813c83-2f60-44ad-9624-ad367cec08f7/kube-controller-manager-operator/1.log"
Feb 24 05:18:42.065989 master-0 kubenswrapper[7614]: I0224 05:18:42.065924 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-r6p58_c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/kube-storage-version-migrator-operator/1.log"
Feb 24 05:18:42.067866 master-0 kubenswrapper[7614]: I0224 05:18:42.067822 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-ncrqj_17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/kube-apiserver-operator/1.log"
Feb 24 05:18:52.173964 master-0 kubenswrapper[7614]: I0224 05:18:52.173891 7614 scope.go:117] "RemoveContainer" containerID="3b73827e2bb1f8b20c02df6acec604b6c43e878ca9e2bd5192c12a2a62cbd894"
Feb 24 05:18:53.135063 master-0 kubenswrapper[7614]: I0224 05:18:53.135001 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-ncrqj_17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/kube-apiserver-operator/1.log"
Feb 24 05:18:53.135063 master-0 kubenswrapper[7614]: I0224 05:18:53.135075 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" event={"ID":"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d","Type":"ContainerStarted","Data":"4128e6ec737b6b0efca5e7827427326735a8755e3faf1df48d6f075e6755cd88"}
Feb 24 05:18:53.176295 master-0 kubenswrapper[7614]: I0224 05:18:53.174777 7614 scope.go:117] "RemoveContainer" containerID="6e3c93a1a355eeeb3f5cb2283a174709bfd59dc7e2e2f1d724c2278f1e630da9"
Feb 24 05:18:54.145551 master-0 kubenswrapper[7614]: I0224 05:18:54.145483 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-4fk6k_f77227c8-c52d-4a71-ae1b-792055f6f23d/network-operator/1.log"
Feb 24 05:18:54.145844 master-0 kubenswrapper[7614]: I0224 05:18:54.145594 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" event={"ID":"f77227c8-c52d-4a71-ae1b-792055f6f23d","Type":"ContainerStarted","Data":"77344984c3a22910313574fd5443c3f8c0826a85a9d2f12dd8592b5e925a1b84"}
Feb 24 05:18:54.174979 master-0 kubenswrapper[7614]: I0224 05:18:54.174929 7614 scope.go:117] "RemoveContainer" containerID="49b21c85c511839ea61bf1eb992b507dfd3ec3bd10df341c02909db55b0a753b"
Feb 24 05:18:54.175219 master-0 kubenswrapper[7614]: I0224 05:18:54.175170 7614 scope.go:117] "RemoveContainer" containerID="03dd9053750096b7f82252736f4fac427fd0dcd291c847a9672ee97680c7a2e7"
Feb 24 05:18:55.156269 master-0 kubenswrapper[7614]: I0224 05:18:55.156181 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-r6p58_c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/kube-storage-version-migrator-operator/1.log"
Feb 24 05:18:55.157886 master-0 kubenswrapper[7614]: I0224 05:18:55.156371 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" event={"ID":"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e","Type":"ContainerStarted","Data":"8eadd02a3eb053b6fcdd393a3aeb7df438083855b4ae5ac3cfedf974ce5cb69c"}
Feb 24 05:18:55.161008 master-0 kubenswrapper[7614]: I0224 05:18:55.160945 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-8zrj9_22813c83-2f60-44ad-9624-ad367cec08f7/kube-controller-manager-operator/1.log"
Feb 24 05:18:55.161150 master-0 kubenswrapper[7614]: I0224 05:18:55.161035 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" event={"ID":"22813c83-2f60-44ad-9624-ad367cec08f7","Type":"ContainerStarted","Data":"c0559153cb9d3232da1d9baca34a653eff61d748f8d7e4af8a7f1e0e1d63e86d"}
Feb 24 05:19:01.025832 master-0 kubenswrapper[7614]: I0224 05:19:01.025756 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-retry-1-master-0"]
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: E0224 05:19:01.026063 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d070e9-4193-4598-ad68-15955b07d649" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: I0224 05:19:01.026085 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d070e9-4193-4598-ad68-15955b07d649" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: E0224 05:19:01.026109 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3d57f1-cd67-4f1d-b267-f652b9bb3448" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: I0224 05:19:01.026123 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3d57f1-cd67-4f1d-b267-f652b9bb3448" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: E0224 05:19:01.026139 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d886fdf-fd74-45de-b7c0-2e8e75eb994e" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: I0224 05:19:01.026152 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d886fdf-fd74-45de-b7c0-2e8e75eb994e" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: E0224 05:19:01.026174 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44f770d-f88d-446a-a22f-51b30e89690c" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: I0224 05:19:01.026187 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44f770d-f88d-446a-a22f-51b30e89690c" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: I0224 05:19:01.026354 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d070e9-4193-4598-ad68-15955b07d649" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: I0224 05:19:01.026379 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d886fdf-fd74-45de-b7c0-2e8e75eb994e" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: I0224 05:19:01.026404 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3d57f1-cd67-4f1d-b267-f652b9bb3448" containerName="installer"
Feb 24 05:19:01.026491 master-0 kubenswrapper[7614]: I0224 05:19:01.026429 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44f770d-f88d-446a-a22f-51b30e89690c" containerName="installer"
Feb 24 05:19:01.027027 master-0 kubenswrapper[7614]: I0224 05:19:01.027001 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.029608 master-0 kubenswrapper[7614]: I0224 05:19:01.029563 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-sh42j"
Feb 24 05:19:01.030596 master-0 kubenswrapper[7614]: I0224 05:19:01.030567 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 24 05:19:01.036994 master-0 kubenswrapper[7614]: I0224 05:19:01.036923 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-retry-1-master-0"]
Feb 24 05:19:01.178612 master-0 kubenswrapper[7614]: I0224 05:19:01.178387 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4df29682-0936-44a2-9629-2e90115671e0-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.178612 master-0 kubenswrapper[7614]: I0224 05:19:01.178503 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.178612 master-0 kubenswrapper[7614]: I0224 05:19:01.178584 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.279989 master-0 kubenswrapper[7614]: I0224 05:19:01.279806 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.279989 master-0 kubenswrapper[7614]: I0224 05:19:01.279965 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.280619 master-0 kubenswrapper[7614]: I0224 05:19:01.280279 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4df29682-0936-44a2-9629-2e90115671e0-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.280619 master-0 kubenswrapper[7614]: I0224 05:19:01.280520 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.280737 master-0 kubenswrapper[7614]: I0224 05:19:01.280615 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.313579 master-0 kubenswrapper[7614]: I0224 05:19:01.313471 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4df29682-0936-44a2-9629-2e90115671e0-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") " pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.355407 master-0 kubenswrapper[7614]: I0224 05:19:01.355283 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:01.768338 master-0 kubenswrapper[7614]: I0224 05:19:01.767325 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-557cb6655b-75nhl"]
Feb 24 05:19:01.768338 master-0 kubenswrapper[7614]: I0224 05:19:01.767604 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" containerID="cri-o://eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0" gracePeriod=30
Feb 24 05:19:01.856789 master-0 kubenswrapper[7614]: I0224 05:19:01.856728 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-retry-1-master-0"]
Feb 24 05:19:01.870288 master-0 kubenswrapper[7614]: I0224 05:19:01.868727 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"]
Feb 24 05:19:01.870288 master-0 kubenswrapper[7614]: I0224 05:19:01.869009 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerName="kube-rbac-proxy" containerID="cri-o://4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2" gracePeriod=30
Feb 24 05:19:01.870288 master-0 kubenswrapper[7614]: I0224 05:19:01.869097 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerName="machine-approver-controller" containerID="cri-o://011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d" gracePeriod=30
Feb 24 05:19:01.877104 master-0 kubenswrapper[7614]: W0224 05:19:01.876992 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4df29682_0936_44a2_9629_2e90115671e0.slice/crio-33a4bcbe5ee93a7507e3b17c9d65e1fc83f9e2c984de2f2f9d7e2c4fd84b6d8a WatchSource:0}: Error finding container 33a4bcbe5ee93a7507e3b17c9d65e1fc83f9e2c984de2f2f9d7e2c4fd84b6d8a: Status 404 returned error can't find the container with id 33a4bcbe5ee93a7507e3b17c9d65e1fc83f9e2c984de2f2f9d7e2c4fd84b6d8a
Feb 24 05:19:01.917217 master-0 kubenswrapper[7614]: I0224 05:19:01.914512 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz"]
Feb 24 05:19:01.917217 master-0 kubenswrapper[7614]: I0224 05:19:01.914760 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" podUID="8cafd431-e8f6-4b60-9214-3d01b1f43982" containerName="route-controller-manager" containerID="cri-o://ee6c5d36068024e9bafe4482e75c474aa3bcf31e561b317ef75ae830061a9718" gracePeriod=30
Feb 24 05:19:02.045943 master-0 kubenswrapper[7614]: I0224 05:19:02.045508 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:19:02.155661 master-0 kubenswrapper[7614]: I0224 05:19:02.155598 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl"
Feb 24 05:19:02.197770 master-0 kubenswrapper[7614]: I0224 05:19:02.197716 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fe235661-d492-48fc-92e6-d9e1938daeb7-machine-approver-tls\") pod \"fe235661-d492-48fc-92e6-d9e1938daeb7\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") "
Feb 24 05:19:02.197988 master-0 kubenswrapper[7614]: I0224 05:19:02.197849 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxj59\" (UniqueName: \"kubernetes.io/projected/fe235661-d492-48fc-92e6-d9e1938daeb7-kube-api-access-xxj59\") pod \"fe235661-d492-48fc-92e6-d9e1938daeb7\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") "
Feb 24 05:19:02.197988 master-0 kubenswrapper[7614]: I0224 05:19:02.197973 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-config\") pod \"fe235661-d492-48fc-92e6-d9e1938daeb7\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") "
Feb 24 05:19:02.198056 master-0 kubenswrapper[7614]: I0224 05:19:02.198000 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-auth-proxy-config\") pod \"fe235661-d492-48fc-92e6-d9e1938daeb7\" (UID: \"fe235661-d492-48fc-92e6-d9e1938daeb7\") "
Feb 24 05:19:02.198850 master-0 kubenswrapper[7614]: I0224 05:19:02.198816 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fe235661-d492-48fc-92e6-d9e1938daeb7" (UID: "fe235661-d492-48fc-92e6-d9e1938daeb7"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:19:02.199365 master-0 kubenswrapper[7614]: I0224 05:19:02.199332 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-config" (OuterVolumeSpecName: "config") pod "fe235661-d492-48fc-92e6-d9e1938daeb7" (UID: "fe235661-d492-48fc-92e6-d9e1938daeb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:19:02.201654 master-0 kubenswrapper[7614]: I0224 05:19:02.201610 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe235661-d492-48fc-92e6-d9e1938daeb7-kube-api-access-xxj59" (OuterVolumeSpecName: "kube-api-access-xxj59") pod "fe235661-d492-48fc-92e6-d9e1938daeb7" (UID: "fe235661-d492-48fc-92e6-d9e1938daeb7"). InnerVolumeSpecName "kube-api-access-xxj59". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:19:02.202022 master-0 kubenswrapper[7614]: I0224 05:19:02.201977 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe235661-d492-48fc-92e6-d9e1938daeb7-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fe235661-d492-48fc-92e6-d9e1938daeb7" (UID: "fe235661-d492-48fc-92e6-d9e1938daeb7"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:19:02.214826 master-0 kubenswrapper[7614]: I0224 05:19:02.214786 7614 generic.go:334] "Generic (PLEG): container finished" podID="e75c6622-29b4-4da8-8409-be898aab9f49" containerID="eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0" exitCode=0
Feb 24 05:19:02.214913 master-0 kubenswrapper[7614]: I0224 05:19:02.214861 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" event={"ID":"e75c6622-29b4-4da8-8409-be898aab9f49","Type":"ContainerDied","Data":"eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0"}
Feb 24 05:19:02.214913 master-0 kubenswrapper[7614]: I0224 05:19:02.214900 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl" event={"ID":"e75c6622-29b4-4da8-8409-be898aab9f49","Type":"ContainerDied","Data":"2ac0807aac1339b1738831a83bed34bab87cdee7e6e8f967e0b4a894d0139f4e"}
Feb 24 05:19:02.214975 master-0 kubenswrapper[7614]: I0224 05:19:02.214922 7614 scope.go:117] "RemoveContainer" containerID="eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0"
Feb 24 05:19:02.215057 master-0 kubenswrapper[7614]: I0224 05:19:02.215038 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-557cb6655b-75nhl"
Feb 24 05:19:02.228102 master-0 kubenswrapper[7614]: I0224 05:19:02.228042 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-retry-1-master-0" event={"ID":"4df29682-0936-44a2-9629-2e90115671e0","Type":"ContainerStarted","Data":"33a4bcbe5ee93a7507e3b17c9d65e1fc83f9e2c984de2f2f9d7e2c4fd84b6d8a"}
Feb 24 05:19:02.235640 master-0 kubenswrapper[7614]: I0224 05:19:02.235423 7614 scope.go:117] "RemoveContainer" containerID="caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b"
Feb 24 05:19:02.236187 master-0 kubenswrapper[7614]: I0224 05:19:02.236149 7614 generic.go:334] "Generic (PLEG): container finished" podID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerID="011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d" exitCode=0
Feb 24 05:19:02.236187 master-0 kubenswrapper[7614]: I0224 05:19:02.236181 7614 generic.go:334] "Generic (PLEG): container finished" podID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerID="4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2" exitCode=0
Feb 24 05:19:02.239652 master-0 kubenswrapper[7614]: I0224 05:19:02.236228 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"
Feb 24 05:19:02.239652 master-0 kubenswrapper[7614]: I0224 05:19:02.236247 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" event={"ID":"fe235661-d492-48fc-92e6-d9e1938daeb7","Type":"ContainerDied","Data":"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d"}
Feb 24 05:19:02.239652 master-0 kubenswrapper[7614]: I0224 05:19:02.236340 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" event={"ID":"fe235661-d492-48fc-92e6-d9e1938daeb7","Type":"ContainerDied","Data":"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2"}
Feb 24 05:19:02.239652 master-0 kubenswrapper[7614]: I0224 05:19:02.236359 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq" event={"ID":"fe235661-d492-48fc-92e6-d9e1938daeb7","Type":"ContainerDied","Data":"8815be107409d9117e98e0fbd4a569a1ac9718c2f1970ad5fa33996f9d7cc8ad"}
Feb 24 05:19:02.247150 master-0 kubenswrapper[7614]: I0224 05:19:02.247096 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" event={"ID":"8cafd431-e8f6-4b60-9214-3d01b1f43982","Type":"ContainerDied","Data":"ee6c5d36068024e9bafe4482e75c474aa3bcf31e561b317ef75ae830061a9718"}
Feb 24 05:19:02.247273 master-0 kubenswrapper[7614]: I0224 05:19:02.247242 7614 generic.go:334] "Generic (PLEG): container finished" podID="8cafd431-e8f6-4b60-9214-3d01b1f43982" containerID="ee6c5d36068024e9bafe4482e75c474aa3bcf31e561b317ef75ae830061a9718" exitCode=0
Feb 24 05:19:02.266481 master-0 kubenswrapper[7614]: I0224 05:19:02.266420 7614 scope.go:117] "RemoveContainer" containerID="eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0"
Feb 24 05:19:02.267221 master-0 kubenswrapper[7614]: E0224 05:19:02.267166 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0\": container with ID starting with eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0 not found: ID does not exist" containerID="eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0"
Feb 24 05:19:02.267260 master-0 kubenswrapper[7614]: I0224 05:19:02.267236 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0"} err="failed to get container status \"eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0\": rpc error: code = NotFound desc = could not find container \"eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0\": container with ID starting with eb6669c66956cacb51b3a8c28637b0795fa1847baffcb87c1466213004c904a0 not found: ID does not exist"
Feb 24 05:19:02.267298 master-0 kubenswrapper[7614]: I0224 05:19:02.267283 7614 scope.go:117] "RemoveContainer" containerID="caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b"
Feb 24 05:19:02.267865 master-0 kubenswrapper[7614]: E0224 05:19:02.267790 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b\": container with ID starting with caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b not found: ID does not exist" containerID="caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b"
Feb 24 05:19:02.267922 master-0 kubenswrapper[7614]: I0224 05:19:02.267890 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b"} err="failed to get container status \"caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b\": rpc error: code = NotFound desc = could not find container \"caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b\": container with ID starting with caaa92858c7b0807c25410a67269dc27f4266cbb4b787010dfde00b1cfce4b7b not found: ID does not exist"
Feb 24 05:19:02.267957 master-0 kubenswrapper[7614]: I0224 05:19:02.267929 7614 scope.go:117] "RemoveContainer" containerID="011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d"
Feb 24 05:19:02.277007 master-0 kubenswrapper[7614]: I0224 05:19:02.274744 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"]
Feb 24 05:19:02.280837 master-0 kubenswrapper[7614]: I0224 05:19:02.280767 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-798b897698-6hgvq"]
Feb 24 05:19:02.293954 master-0 kubenswrapper[7614]: I0224 05:19:02.292291 7614 scope.go:117] "RemoveContainer" containerID="4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2"
Feb 24 05:19:02.301441 master-0 kubenswrapper[7614]: I0224 05:19:02.300633 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-proxy-ca-bundles\") pod \"e75c6622-29b4-4da8-8409-be898aab9f49\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") "
Feb 24 05:19:02.301623 master-0 kubenswrapper[7614]: I0224 05:19:02.301276 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e75c6622-29b4-4da8-8409-be898aab9f49" (UID: "e75c6622-29b4-4da8-8409-be898aab9f49"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:19:02.302600 master-0 kubenswrapper[7614]: I0224 05:19:02.302545 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgnf5\" (UniqueName: \"kubernetes.io/projected/e75c6622-29b4-4da8-8409-be898aab9f49-kube-api-access-qgnf5\") pod \"e75c6622-29b4-4da8-8409-be898aab9f49\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") "
Feb 24 05:19:02.302655 master-0 kubenswrapper[7614]: I0224 05:19:02.302624 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-config\") pod \"e75c6622-29b4-4da8-8409-be898aab9f49\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") "
Feb 24 05:19:02.302771 master-0 kubenswrapper[7614]: I0224 05:19:02.302732 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e75c6622-29b4-4da8-8409-be898aab9f49-serving-cert\") pod \"e75c6622-29b4-4da8-8409-be898aab9f49\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") "
Feb 24 05:19:02.302811 master-0 kubenswrapper[7614]: I0224 05:19:02.302785 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-client-ca\") pod \"e75c6622-29b4-4da8-8409-be898aab9f49\" (UID: \"e75c6622-29b4-4da8-8409-be898aab9f49\") "
Feb 24 05:19:02.303514 master-0 kubenswrapper[7614]: I0224 05:19:02.303282 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxj59\" (UniqueName: \"kubernetes.io/projected/fe235661-d492-48fc-92e6-d9e1938daeb7-kube-api-access-xxj59\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:02.303514 master-0 kubenswrapper[7614]: I0224 05:19:02.303332 7614 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:02.303514 master-0 kubenswrapper[7614]: I0224 05:19:02.303369 7614 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-auth-proxy-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:02.303514 master-0 kubenswrapper[7614]: I0224 05:19:02.303385 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe235661-d492-48fc-92e6-d9e1938daeb7-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:02.303514 master-0 kubenswrapper[7614]: I0224 05:19:02.303396 7614 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fe235661-d492-48fc-92e6-d9e1938daeb7-machine-approver-tls\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:02.303937 master-0 kubenswrapper[7614]: I0224 05:19:02.303888 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-config" (OuterVolumeSpecName: "config") pod "e75c6622-29b4-4da8-8409-be898aab9f49" (UID: "e75c6622-29b4-4da8-8409-be898aab9f49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:19:02.304217 master-0 kubenswrapper[7614]: I0224 05:19:02.304144 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-client-ca" (OuterVolumeSpecName: "client-ca") pod "e75c6622-29b4-4da8-8409-be898aab9f49" (UID: "e75c6622-29b4-4da8-8409-be898aab9f49"). InnerVolumeSpecName "client-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:19:02.306770 master-0 kubenswrapper[7614]: I0224 05:19:02.306613 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e75c6622-29b4-4da8-8409-be898aab9f49-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e75c6622-29b4-4da8-8409-be898aab9f49" (UID: "e75c6622-29b4-4da8-8409-be898aab9f49"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:19:02.310405 master-0 kubenswrapper[7614]: I0224 05:19:02.310367 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e75c6622-29b4-4da8-8409-be898aab9f49-kube-api-access-qgnf5" (OuterVolumeSpecName: "kube-api-access-qgnf5") pod "e75c6622-29b4-4da8-8409-be898aab9f49" (UID: "e75c6622-29b4-4da8-8409-be898aab9f49"). InnerVolumeSpecName "kube-api-access-qgnf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:19:02.314487 master-0 kubenswrapper[7614]: I0224 05:19:02.313499 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:19:02.319940 master-0 kubenswrapper[7614]: I0224 05:19:02.319761 7614 scope.go:117] "RemoveContainer" containerID="011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d" Feb 24 05:19:02.320781 master-0 kubenswrapper[7614]: E0224 05:19:02.320426 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d\": container with ID starting with 011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d not found: ID does not exist" containerID="011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d" Feb 24 05:19:02.320781 master-0 kubenswrapper[7614]: I0224 05:19:02.320477 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d"} err="failed to get container status \"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d\": rpc error: code = NotFound desc = could not find container \"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d\": container with ID starting with 011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d not found: ID does not exist" Feb 24 05:19:02.320781 master-0 kubenswrapper[7614]: I0224 05:19:02.320514 7614 scope.go:117] "RemoveContainer" containerID="4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2" Feb 24 05:19:02.321989 master-0 kubenswrapper[7614]: E0224 05:19:02.321834 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2\": container with ID starting with 4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2 not found: ID does not exist" 
containerID="4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2" Feb 24 05:19:02.322140 master-0 kubenswrapper[7614]: I0224 05:19:02.321898 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2"} err="failed to get container status \"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2\": rpc error: code = NotFound desc = could not find container \"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2\": container with ID starting with 4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2 not found: ID does not exist" Feb 24 05:19:02.322140 master-0 kubenswrapper[7614]: I0224 05:19:02.322067 7614 scope.go:117] "RemoveContainer" containerID="011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d" Feb 24 05:19:02.322605 master-0 kubenswrapper[7614]: I0224 05:19:02.322534 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d"} err="failed to get container status \"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d\": rpc error: code = NotFound desc = could not find container \"011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d\": container with ID starting with 011d397235a77ecca99655074960637a0e06d4850a685cb71f22e206406b1d2d not found: ID does not exist" Feb 24 05:19:02.322605 master-0 kubenswrapper[7614]: I0224 05:19:02.322553 7614 scope.go:117] "RemoveContainer" containerID="4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2" Feb 24 05:19:02.323008 master-0 kubenswrapper[7614]: I0224 05:19:02.322897 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2"} err="failed to get container status 
\"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2\": rpc error: code = NotFound desc = could not find container \"4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2\": container with ID starting with 4ca5d250bec3b25175c35f6a821fe8e53e48f1d38446e2b1d4770a2ed43da0a2 not found: ID does not exist" Feb 24 05:19:02.405826 master-0 kubenswrapper[7614]: I0224 05:19:02.405714 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgnf5\" (UniqueName: \"kubernetes.io/projected/e75c6622-29b4-4da8-8409-be898aab9f49-kube-api-access-qgnf5\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:02.405826 master-0 kubenswrapper[7614]: I0224 05:19:02.405761 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:02.405826 master-0 kubenswrapper[7614]: I0224 05:19:02.405774 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e75c6622-29b4-4da8-8409-be898aab9f49-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:02.405826 master-0 kubenswrapper[7614]: I0224 05:19:02.405784 7614 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e75c6622-29b4-4da8-8409-be898aab9f49-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:02.507787 master-0 kubenswrapper[7614]: I0224 05:19:02.507518 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-client-ca\") pod \"8cafd431-e8f6-4b60-9214-3d01b1f43982\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " Feb 24 05:19:02.507787 master-0 kubenswrapper[7614]: I0224 05:19:02.507667 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-config\") pod \"8cafd431-e8f6-4b60-9214-3d01b1f43982\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " Feb 24 05:19:02.507787 master-0 kubenswrapper[7614]: I0224 05:19:02.507753 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbfct\" (UniqueName: \"kubernetes.io/projected/8cafd431-e8f6-4b60-9214-3d01b1f43982-kube-api-access-jbfct\") pod \"8cafd431-e8f6-4b60-9214-3d01b1f43982\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " Feb 24 05:19:02.508217 master-0 kubenswrapper[7614]: I0224 05:19:02.507872 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cafd431-e8f6-4b60-9214-3d01b1f43982-serving-cert\") pod \"8cafd431-e8f6-4b60-9214-3d01b1f43982\" (UID: \"8cafd431-e8f6-4b60-9214-3d01b1f43982\") " Feb 24 05:19:02.508261 master-0 kubenswrapper[7614]: I0224 05:19:02.508218 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-client-ca" (OuterVolumeSpecName: "client-ca") pod "8cafd431-e8f6-4b60-9214-3d01b1f43982" (UID: "8cafd431-e8f6-4b60-9214-3d01b1f43982"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:19:02.508300 master-0 kubenswrapper[7614]: I0224 05:19:02.508269 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-config" (OuterVolumeSpecName: "config") pod "8cafd431-e8f6-4b60-9214-3d01b1f43982" (UID: "8cafd431-e8f6-4b60-9214-3d01b1f43982"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:19:02.512429 master-0 kubenswrapper[7614]: I0224 05:19:02.512358 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cafd431-e8f6-4b60-9214-3d01b1f43982-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cafd431-e8f6-4b60-9214-3d01b1f43982" (UID: "8cafd431-e8f6-4b60-9214-3d01b1f43982"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:19:02.513864 master-0 kubenswrapper[7614]: I0224 05:19:02.513817 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cafd431-e8f6-4b60-9214-3d01b1f43982-kube-api-access-jbfct" (OuterVolumeSpecName: "kube-api-access-jbfct") pod "8cafd431-e8f6-4b60-9214-3d01b1f43982" (UID: "8cafd431-e8f6-4b60-9214-3d01b1f43982"). InnerVolumeSpecName "kube-api-access-jbfct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:19:02.579329 master-0 kubenswrapper[7614]: I0224 05:19:02.579180 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-557cb6655b-75nhl"] Feb 24 05:19:02.586178 master-0 kubenswrapper[7614]: I0224 05:19:02.586127 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-557cb6655b-75nhl"] Feb 24 05:19:02.610340 master-0 kubenswrapper[7614]: I0224 05:19:02.609629 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbfct\" (UniqueName: \"kubernetes.io/projected/8cafd431-e8f6-4b60-9214-3d01b1f43982-kube-api-access-jbfct\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:02.610340 master-0 kubenswrapper[7614]: I0224 05:19:02.609692 7614 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cafd431-e8f6-4b60-9214-3d01b1f43982-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:02.610340 master-0 
kubenswrapper[7614]: I0224 05:19:02.609714 7614 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:02.610340 master-0 kubenswrapper[7614]: I0224 05:19:02.609734 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cafd431-e8f6-4b60-9214-3d01b1f43982-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:02.936256 master-0 kubenswrapper[7614]: I0224 05:19:02.936103 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv"] Feb 24 05:19:02.936557 master-0 kubenswrapper[7614]: E0224 05:19:02.936523 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" Feb 24 05:19:02.936609 master-0 kubenswrapper[7614]: I0224 05:19:02.936564 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" Feb 24 05:19:02.936645 master-0 kubenswrapper[7614]: E0224 05:19:02.936612 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerName="machine-approver-controller" Feb 24 05:19:02.936645 master-0 kubenswrapper[7614]: I0224 05:19:02.936630 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerName="machine-approver-controller" Feb 24 05:19:02.936703 master-0 kubenswrapper[7614]: E0224 05:19:02.936650 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerName="kube-rbac-proxy" Feb 24 05:19:02.936703 master-0 kubenswrapper[7614]: I0224 05:19:02.936665 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" 
containerName="kube-rbac-proxy" Feb 24 05:19:02.936703 master-0 kubenswrapper[7614]: E0224 05:19:02.936686 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cafd431-e8f6-4b60-9214-3d01b1f43982" containerName="route-controller-manager" Feb 24 05:19:02.936703 master-0 kubenswrapper[7614]: I0224 05:19:02.936697 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cafd431-e8f6-4b60-9214-3d01b1f43982" containerName="route-controller-manager" Feb 24 05:19:02.936864 master-0 kubenswrapper[7614]: I0224 05:19:02.936833 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerName="kube-rbac-proxy" Feb 24 05:19:02.936902 master-0 kubenswrapper[7614]: I0224 05:19:02.936879 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" Feb 24 05:19:02.936902 master-0 kubenswrapper[7614]: I0224 05:19:02.936895 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" containerName="machine-approver-controller" Feb 24 05:19:02.936960 master-0 kubenswrapper[7614]: I0224 05:19:02.936912 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cafd431-e8f6-4b60-9214-3d01b1f43982" containerName="route-controller-manager" Feb 24 05:19:02.936960 master-0 kubenswrapper[7614]: I0224 05:19:02.936932 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" Feb 24 05:19:02.937130 master-0 kubenswrapper[7614]: E0224 05:19:02.937054 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" Feb 24 05:19:02.937130 master-0 kubenswrapper[7614]: I0224 05:19:02.937077 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" containerName="controller-manager" 
Feb 24 05:19:02.937905 master-0 kubenswrapper[7614]: I0224 05:19:02.937861 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv"
Feb 24 05:19:02.938581 master-0 kubenswrapper[7614]: I0224 05:19:02.938531 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"]
Feb 24 05:19:02.940026 master-0 kubenswrapper[7614]: I0224 05:19:02.939985 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"
Feb 24 05:19:02.941068 master-0 kubenswrapper[7614]: I0224 05:19:02.941020 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"]
Feb 24 05:19:02.942147 master-0 kubenswrapper[7614]: I0224 05:19:02.942112 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"
Feb 24 05:19:02.943764 master-0 kubenswrapper[7614]: I0224 05:19:02.943722 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 24 05:19:02.945266 master-0 kubenswrapper[7614]: I0224 05:19:02.945226 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-85vp6"
Feb 24 05:19:02.945423 master-0 kubenswrapper[7614]: I0224 05:19:02.945358 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 24 05:19:02.945526 master-0 kubenswrapper[7614]: I0224 05:19:02.945494 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-d22d8"
Feb 24 05:19:02.948969 master-0 kubenswrapper[7614]: I0224 05:19:02.948935 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 24 05:19:02.948969 master-0 kubenswrapper[7614]: I0224 05:19:02.948943 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 05:19:02.949097 master-0 kubenswrapper[7614]: I0224 05:19:02.948982 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 24 05:19:02.949097 master-0 kubenswrapper[7614]: I0224 05:19:02.949008 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 24 05:19:02.949097 master-0 kubenswrapper[7614]: I0224 05:19:02.949053 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-zqsq8"
Feb 24 05:19:02.949886 master-0 kubenswrapper[7614]: I0224 05:19:02.949283 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 24 05:19:02.949886 master-0 kubenswrapper[7614]: I0224 05:19:02.949739 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 05:19:02.950008 master-0 kubenswrapper[7614]: I0224 05:19:02.949985 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 24 05:19:02.950090 master-0 kubenswrapper[7614]: I0224 05:19:02.950052 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5"]
Feb 24 05:19:02.951226 master-0 kubenswrapper[7614]: I0224 05:19:02.951207 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5"
Feb 24 05:19:02.966745 master-0 kubenswrapper[7614]: I0224 05:19:02.964924 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z"]
Feb 24 05:19:02.966745 master-0 kubenswrapper[7614]: I0224 05:19:02.965504 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-7tq27"
Feb 24 05:19:02.966745 master-0 kubenswrapper[7614]: I0224 05:19:02.965559 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 24 05:19:02.966745 master-0 kubenswrapper[7614]: I0224 05:19:02.965586 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 24 05:19:02.966745 master-0 kubenswrapper[7614]: I0224 05:19:02.965578 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 24 05:19:02.983847 master-0 kubenswrapper[7614]: I0224 05:19:02.983799 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z"
Feb 24 05:19:02.986461 master-0 kubenswrapper[7614]: I0224 05:19:02.986421 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-44r64"
Feb 24 05:19:02.997095 master-0 kubenswrapper[7614]: I0224 05:19:02.997027 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 24 05:19:02.997515 master-0 kubenswrapper[7614]: I0224 05:19:02.997462 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 24 05:19:03.002938 master-0 kubenswrapper[7614]: I0224 05:19:03.002411 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7657d7494-mmsz6"]
Feb 24 05:19:03.015292 master-0 kubenswrapper[7614]: I0224 05:19:03.014388 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6"
Feb 24 05:19:03.015782 master-0 kubenswrapper[7614]: I0224 05:19:03.015620 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/51b6b038-7029-4e3e-af6d-b7f85ac532b0-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"
Feb 24 05:19:03.016009 master-0 kubenswrapper[7614]: I0224 05:19:03.015957 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg9bg\" (UniqueName: \"kubernetes.io/projected/51b6b038-7029-4e3e-af6d-b7f85ac532b0-kube-api-access-zg9bg\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"
Feb 24 05:19:03.016112 master-0 kubenswrapper[7614]: I0224 05:19:03.016085 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"
Feb 24 05:19:03.016155 master-0 kubenswrapper[7614]: I0224 05:19:03.016136 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"
Feb 24 05:19:03.018049 master-0 kubenswrapper[7614]: I0224 05:19:03.016358 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/51b6b038-7029-4e3e-af6d-b7f85ac532b0-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"
Feb 24 05:19:03.021483 master-0 kubenswrapper[7614]: I0224 05:19:03.021438 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 05:19:03.021829 master-0 kubenswrapper[7614]: I0224 05:19:03.021804 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 05:19:03.021929 master-0 kubenswrapper[7614]: I0224 05:19:03.021878 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 05:19:03.021929 master-0 kubenswrapper[7614]: I0224 05:19:03.021900 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rv6pq"
Feb 24 05:19:03.022126 master-0 kubenswrapper[7614]: I0224 05:19:03.022060 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 05:19:03.022267 master-0 kubenswrapper[7614]: I0224 05:19:03.022231 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 05:19:03.025924 master-0 kubenswrapper[7614]: I0224 05:19:03.025876 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5"]
Feb 24 05:19:03.026930 master-0 kubenswrapper[7614]: I0224 05:19:03.026896 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5"
Feb 24 05:19:03.031173 master-0 kubenswrapper[7614]: I0224 05:19:03.031137 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jdmr6"
Feb 24 05:19:03.031781 master-0 kubenswrapper[7614]: I0224 05:19:03.031732 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 24 05:19:03.032276 master-0 kubenswrapper[7614]: I0224 05:19:03.032237 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl"]
Feb 24 05:19:03.034710 master-0 kubenswrapper[7614]: I0224 05:19:03.033694 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl"
Feb 24 05:19:03.035696 master-0 kubenswrapper[7614]: I0224 05:19:03.035662 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 24 05:19:03.036024 master-0 kubenswrapper[7614]: I0224 05:19:03.035994 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 24 05:19:03.036232 master-0 kubenswrapper[7614]: I0224 05:19:03.036205 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-fxsc2"
Feb 24 05:19:03.037018 master-0 kubenswrapper[7614]: I0224 05:19:03.036447 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 05:19:03.038419 master-0 kubenswrapper[7614]: I0224 05:19:03.037810 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"]
Feb 24 05:19:03.039964 master-0 kubenswrapper[7614]: I0224 05:19:03.039919 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 05:19:03.040932 master-0 kubenswrapper[7614]: I0224 05:19:03.040892 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq"]
Feb 24 05:19:03.041924 master-0 kubenswrapper[7614]: I0224 05:19:03.041878 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq"
Feb 24 05:19:03.042690 master-0 kubenswrapper[7614]: I0224 05:19:03.042655 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"
Feb 24 05:19:03.043691 master-0 kubenswrapper[7614]: I0224 05:19:03.043658 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 24 05:19:03.047112 master-0 kubenswrapper[7614]: I0224 05:19:03.045843 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2"]
Feb 24 05:19:03.047473 master-0 kubenswrapper[7614]: I0224 05:19:03.047272 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 24 05:19:03.048922 master-0 kubenswrapper[7614]: I0224 05:19:03.048801 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-p74xw"
Feb 24 05:19:03.049119 master-0 kubenswrapper[7614]: I0224 05:19:03.049088 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 24 05:19:03.049161 master-0 kubenswrapper[7614]: I0224 05:19:03.049134 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-ckqkb"
Feb 24 05:19:03.049264 master-0 kubenswrapper[7614]: I0224 05:19:03.049245 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 05:19:03.049410 master-0 kubenswrapper[7614]: I0224 05:19:03.049389 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 24 05:19:03.050640 master-0 kubenswrapper[7614]: I0224 05:19:03.050604 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2"
Feb 24 05:19:03.053361 master-0 kubenswrapper[7614]: I0224 05:19:03.052524 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 24 05:19:03.053361 master-0 kubenswrapper[7614]: I0224 05:19:03.053206 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"]
Feb 24 05:19:03.058365 master-0 kubenswrapper[7614]: I0224 05:19:03.053873 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"
Feb 24 05:19:03.058365 master-0 kubenswrapper[7614]: I0224 05:19:03.055000 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 05:19:03.058365 master-0 kubenswrapper[7614]: I0224 05:19:03.055827 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"]
Feb 24 05:19:03.058365 master-0 kubenswrapper[7614]: I0224 05:19:03.056652 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 24 05:19:03.058365 master-0 kubenswrapper[7614]: I0224 05:19:03.056711 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-9rnhs"
Feb 24 05:19:03.058365 master-0 kubenswrapper[7614]: I0224 05:19:03.057264 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"
Feb 24 05:19:03.058365 master-0 kubenswrapper[7614]: I0224 05:19:03.057294 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 05:19:03.060033 master-0 kubenswrapper[7614]: I0224 05:19:03.059882 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 24 05:19:03.060768 master-0 kubenswrapper[7614]: I0224 05:19:03.060457 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 24 05:19:03.061044 master-0 kubenswrapper[7614]: I0224 05:19:03.061001 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 24 05:19:03.062367 master-0 kubenswrapper[7614]: I0224 05:19:03.061565 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9cc2t"
Feb 24 05:19:03.063251 master-0 kubenswrapper[7614]: I0224 05:19:03.063204 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw"]
Feb 24 05:19:03.064464 master-0 kubenswrapper[7614]: I0224 05:19:03.064428 7614 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.066761 master-0 kubenswrapper[7614]: I0224 05:19:03.066720 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 24 05:19:03.066970 master-0 kubenswrapper[7614]: I0224 05:19:03.066937 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 24 05:19:03.068130 master-0 kubenswrapper[7614]: I0224 05:19:03.068098 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 05:19:03.068193 master-0 kubenswrapper[7614]: I0224 05:19:03.068133 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 05:19:03.068346 master-0 kubenswrapper[7614]: I0224 05:19:03.068256 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 05:19:03.068434 master-0 kubenswrapper[7614]: I0224 05:19:03.068402 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 05:19:03.068487 master-0 kubenswrapper[7614]: I0224 05:19:03.068476 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-w9h5v" Feb 24 05:19:03.068652 master-0 kubenswrapper[7614]: I0224 05:19:03.068599 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 05:19:03.070983 master-0 kubenswrapper[7614]: I0224 05:19:03.069550 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md"] Feb 24 05:19:03.072993 master-0 kubenswrapper[7614]: I0224 
05:19:03.072949 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.074836 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.075008 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.075166 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.075262 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.075276 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-qhzzf" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.075420 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.076549 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-mprnx"] Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.077573 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.078801 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.078766 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv"] Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.079718 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5"] Feb 24 05:19:03.080345 master-0 kubenswrapper[7614]: I0224 05:19:03.080200 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 24 05:19:03.080914 master-0 kubenswrapper[7614]: I0224 05:19:03.080404 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 24 05:19:03.082396 master-0 kubenswrapper[7614]: I0224 05:19:03.081775 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 24 05:19:03.082396 master-0 kubenswrapper[7614]: I0224 05:19:03.081805 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2gwgm" Feb 24 05:19:03.082396 master-0 kubenswrapper[7614]: I0224 05:19:03.081832 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 24 05:19:03.082396 master-0 kubenswrapper[7614]: I0224 05:19:03.082236 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"] Feb 24 05:19:03.085071 master-0 kubenswrapper[7614]: I0224 05:19:03.085030 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5"] Feb 24 05:19:03.091578 master-0 kubenswrapper[7614]: I0224 05:19:03.091512 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-mprnx"] Feb 24 05:19:03.093743 master-0 kubenswrapper[7614]: I0224 05:19:03.093689 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl"] Feb 24 05:19:03.095446 master-0 kubenswrapper[7614]: I0224 05:19:03.095390 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"] Feb 24 05:19:03.096583 master-0 kubenswrapper[7614]: I0224 05:19:03.096538 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"] Feb 24 05:19:03.098901 master-0 kubenswrapper[7614]: I0224 05:19:03.098844 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md"] Feb 24 05:19:03.101671 master-0 kubenswrapper[7614]: I0224 05:19:03.101636 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7657d7494-mmsz6"] Feb 24 05:19:03.103208 master-0 kubenswrapper[7614]: I0224 05:19:03.103152 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2"] Feb 24 05:19:03.106513 master-0 kubenswrapper[7614]: I0224 05:19:03.106484 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z"] Feb 24 05:19:03.108137 master-0 kubenswrapper[7614]: I0224 05:19:03.108051 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"] Feb 24 05:19:03.111645 master-0 kubenswrapper[7614]: 
I0224 05:19:03.111603 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq"] Feb 24 05:19:03.117152 master-0 kubenswrapper[7614]: I0224 05:19:03.117113 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.117231 master-0 kubenswrapper[7614]: I0224 05:19:03.117163 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4qf\" (UniqueName: \"kubernetes.io/projected/e6a0fc47-b446-4902-9f8a-04870cbafcab-kube-api-access-kx4qf\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.117231 master-0 kubenswrapper[7614]: I0224 05:19:03.117188 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dtv\" (UniqueName: \"kubernetes.io/projected/b46907eb-36d6-4410-b7d8-8012b254c861-kube-api-access-k8dtv\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.117231 master-0 kubenswrapper[7614]: I0224 05:19:03.117208 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjtv8\" (UniqueName: \"kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " 
pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.117370 master-0 kubenswrapper[7614]: I0224 05:19:03.117234 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.117370 master-0 kubenswrapper[7614]: I0224 05:19:03.117349 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.117574 master-0 kubenswrapper[7614]: I0224 05:19:03.117515 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.117612 master-0 kubenswrapper[7614]: I0224 05:19:03.117587 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-images\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.117646 master-0 kubenswrapper[7614]: I0224 
05:19:03.117622 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjzn\" (UniqueName: \"kubernetes.io/projected/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-kube-api-access-7vjzn\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:19:03.117682 master-0 kubenswrapper[7614]: I0224 05:19:03.117657 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb75b\" (UniqueName: \"kubernetes.io/projected/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-kube-api-access-nb75b\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:19:03.117768 master-0 kubenswrapper[7614]: I0224 05:19:03.117732 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.117802 master-0 kubenswrapper[7614]: I0224 05:19:03.117776 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dbj\" (UniqueName: \"kubernetes.io/projected/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-kube-api-access-25dbj\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.117853 master-0 kubenswrapper[7614]: I0224 05:19:03.117807 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.117891 master-0 kubenswrapper[7614]: I0224 05:19:03.117873 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.118378 master-0 kubenswrapper[7614]: I0224 05:19:03.117929 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.118378 master-0 kubenswrapper[7614]: I0224 05:19:03.118026 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/51b6b038-7029-4e3e-af6d-b7f85ac532b0-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.118378 master-0 kubenswrapper[7614]: I0224 05:19:03.118269 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-srv-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.118378 master-0 kubenswrapper[7614]: I0224 05:19:03.118323 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.118378 master-0 kubenswrapper[7614]: I0224 05:19:03.118369 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.118640 master-0 kubenswrapper[7614]: I0224 05:19:03.118401 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg9bg\" (UniqueName: \"kubernetes.io/projected/51b6b038-7029-4e3e-af6d-b7f85ac532b0-kube-api-access-zg9bg\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.118640 master-0 kubenswrapper[7614]: I0224 05:19:03.118425 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca\") pod \"controller-manager-7657d7494-mmsz6\" (UID: 
\"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.118640 master-0 kubenswrapper[7614]: I0224 05:19:03.118458 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.118640 master-0 kubenswrapper[7614]: I0224 05:19:03.118483 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.118640 master-0 kubenswrapper[7614]: I0224 05:19:03.118508 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.118640 master-0 kubenswrapper[7614]: I0224 05:19:03.118534 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " 
pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.118640 master-0 kubenswrapper[7614]: I0224 05:19:03.118589 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kh6l\" (UniqueName: \"kubernetes.io/projected/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-kube-api-access-2kh6l\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.118640 master-0 kubenswrapper[7614]: I0224 05:19:03.118614 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.118916 master-0 kubenswrapper[7614]: I0224 05:19:03.118731 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/116e6b47-d435-49ca-abb5-088788daf16a-kube-api-access-jt9fb\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.118916 master-0 kubenswrapper[7614]: I0224 05:19:03.118783 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.118916 master-0 
kubenswrapper[7614]: I0224 05:19:03.118814 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.118916 master-0 kubenswrapper[7614]: I0224 05:19:03.118844 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3f511d03-a182-4968-ba40-5c5c10e5e6be-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.118916 master-0 kubenswrapper[7614]: I0224 05:19:03.118875 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bwl7\" (UniqueName: \"kubernetes.io/projected/9666fc94-71e3-46af-8b45-26e3a085d076-kube-api-access-5bwl7\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.118916 master-0 kubenswrapper[7614]: I0224 05:19:03.118904 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.118932 7614 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddfqw\" (UniqueName: \"kubernetes.io/projected/39623346-691b-42c8-af76-409d4f6629af-kube-api-access-ddfqw\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.118963 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.118993 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.119020 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cczbm\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-kube-api-access-cczbm\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.119054 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.119081 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.119110 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.119136 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.119178 master-0 kubenswrapper[7614]: I0224 05:19:03.119165 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/51b6b038-7029-4e3e-af6d-b7f85ac532b0-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: 
\"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.119474 master-0 kubenswrapper[7614]: I0224 05:19:03.119195 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vdmz\" (UniqueName: \"kubernetes.io/projected/3f511d03-a182-4968-ba40-5c5c10e5e6be-kube-api-access-4vdmz\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.119474 master-0 kubenswrapper[7614]: I0224 05:19:03.119229 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.119474 master-0 kubenswrapper[7614]: I0224 05:19:03.119255 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.119474 master-0 kubenswrapper[7614]: I0224 05:19:03.119280 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.119474 master-0 kubenswrapper[7614]: 
I0224 05:19:03.119330 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.119474 master-0 kubenswrapper[7614]: I0224 05:19:03.119360 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.119851 master-0 kubenswrapper[7614]: I0224 05:19:03.119735 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lkf2\" (UniqueName: \"kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.119903 master-0 kubenswrapper[7614]: I0224 05:19:03.119871 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/51b6b038-7029-4e3e-af6d-b7f85ac532b0-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.119980 master-0 kubenswrapper[7614]: I0224 05:19:03.119950 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-config\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.120598 master-0 kubenswrapper[7614]: I0224 05:19:03.120562 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-images\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.120676 master-0 kubenswrapper[7614]: I0224 05:19:03.120598 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.124255 master-0 kubenswrapper[7614]: I0224 05:19:03.124021 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/51b6b038-7029-4e3e-af6d-b7f85ac532b0-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.144541 master-0 kubenswrapper[7614]: I0224 05:19:03.144468 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg9bg\" (UniqueName: 
\"kubernetes.io/projected/51b6b038-7029-4e3e-af6d-b7f85ac532b0-kube-api-access-zg9bg\") pod \"cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.208894 master-0 kubenswrapper[7614]: I0224 05:19:03.194918 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e75c6622-29b4-4da8-8409-be898aab9f49" path="/var/lib/kubelet/pods/e75c6622-29b4-4da8-8409-be898aab9f49/volumes" Feb 24 05:19:03.208894 master-0 kubenswrapper[7614]: I0224 05:19:03.196108 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe235661-d492-48fc-92e6-d9e1938daeb7" path="/var/lib/kubelet/pods/fe235661-d492-48fc-92e6-d9e1938daeb7/volumes" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.222645 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.223814 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.224029 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: 
\"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.224433 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.224561 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.224598 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vdmz\" (UniqueName: \"kubernetes.io/projected/3f511d03-a182-4968-ba40-5c5c10e5e6be-kube-api-access-4vdmz\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.224641 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2fkp\" (UniqueName: \"kubernetes.io/projected/39c4d0aa-c372-4d02-9302-337e68b56784-kube-api-access-b2fkp\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.236733 master-0 
kubenswrapper[7614]: I0224 05:19:03.224676 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.224775 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.224826 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.224878 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225430 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: 
\"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225462 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lkf2\" (UniqueName: \"kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225491 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-config\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225514 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dwz2\" (UniqueName: \"kubernetes.io/projected/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-kube-api-access-5dwz2\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225542 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225564 7614 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225595 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225619 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225640 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dtv\" (UniqueName: \"kubernetes.io/projected/b46907eb-36d6-4410-b7d8-8012b254c861-kube-api-access-k8dtv\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225663 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls\") pod 
\"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225686 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4qf\" (UniqueName: \"kubernetes.io/projected/e6a0fc47-b446-4902-9f8a-04870cbafcab-kube-api-access-kx4qf\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225708 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtv8\" (UniqueName: \"kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225731 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225757 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" 
Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225779 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225806 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjzn\" (UniqueName: \"kubernetes.io/projected/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-kube-api-access-7vjzn\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.225824 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-images\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.226236 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb75b\" (UniqueName: \"kubernetes.io/projected/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-kube-api-access-nb75b\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.226273 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.226683 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.227384 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-images\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.227493 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25dbj\" (UniqueName: \"kubernetes.io/projected/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-kube-api-access-25dbj\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.227942 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.236733 
master-0 kubenswrapper[7614]: I0224 05:19:03.228085 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.228145 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.228207 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-config\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.228262 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-srv-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.228420 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: 
\"kubernetes.io/empty-dir/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-snapshots\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.228515 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.228946 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.228981 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229046 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.236733 master-0 
kubenswrapper[7614]: I0224 05:19:03.229070 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229065 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.228979 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229144 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229146 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca\") 
pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229175 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229235 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229273 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kh6l\" (UniqueName: \"kubernetes.io/projected/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-kube-api-access-2kh6l\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229327 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/116e6b47-d435-49ca-abb5-088788daf16a-kube-api-access-jt9fb\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.236733 master-0 
kubenswrapper[7614]: I0224 05:19:03.229348 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229356 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229398 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3f511d03-a182-4968-ba40-5c5c10e5e6be-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229426 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bwl7\" (UniqueName: \"kubernetes.io/projected/9666fc94-71e3-46af-8b45-26e3a085d076-kube-api-access-5bwl7\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229457 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229486 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229511 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddfqw\" (UniqueName: \"kubernetes.io/projected/39623346-691b-42c8-af76-409d4f6629af-kube-api-access-ddfqw\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229543 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229578 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cczbm\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-kube-api-access-cczbm\") pod 
\"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229618 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.229643 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.230183 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.230583 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3f511d03-a182-4968-ba40-5c5c10e5e6be-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.231146 7614 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.231868 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.231928 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.232528 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.232719 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles\") pod 
\"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.233387 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.236733 master-0 kubenswrapper[7614]: I0224 05:19:03.235858 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:19:03.241024 master-0 kubenswrapper[7614]: I0224 05:19:03.239863 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.250391 master-0 kubenswrapper[7614]: I0224 05:19:03.242363 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.252887 master-0 kubenswrapper[7614]: I0224 05:19:03.252828 7614 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.254900 master-0 kubenswrapper[7614]: I0224 05:19:03.254849 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.256040 master-0 kubenswrapper[7614]: I0224 05:19:03.255999 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.256628 master-0 kubenswrapper[7614]: I0224 05:19:03.256588 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:19:03.257210 master-0 kubenswrapper[7614]: I0224 05:19:03.257168 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-srv-cert\") pod 
\"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.257869 master-0 kubenswrapper[7614]: I0224 05:19:03.257824 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.257869 master-0 kubenswrapper[7614]: I0224 05:19:03.257855 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.257982 master-0 kubenswrapper[7614]: I0224 05:19:03.257904 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.258684 master-0 kubenswrapper[7614]: I0224 05:19:03.258649 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.264496 master-0 kubenswrapper[7614]: I0224 05:19:03.264451 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/116e6b47-d435-49ca-abb5-088788daf16a-kube-api-access-jt9fb\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.264496 master-0 kubenswrapper[7614]: I0224 05:19:03.264482 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb75b\" (UniqueName: \"kubernetes.io/projected/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-kube-api-access-nb75b\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:19:03.264600 master-0 kubenswrapper[7614]: I0224 05:19:03.264571 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lkf2\" (UniqueName: \"kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.264854 master-0 kubenswrapper[7614]: I0224 05:19:03.264823 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25dbj\" (UniqueName: \"kubernetes.io/projected/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-kube-api-access-25dbj\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.265686 master-0 kubenswrapper[7614]: I0224 05:19:03.265638 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtv8\" (UniqueName: \"kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") 
" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.266125 master-0 kubenswrapper[7614]: I0224 05:19:03.266076 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bwl7\" (UniqueName: \"kubernetes.io/projected/9666fc94-71e3-46af-8b45-26e3a085d076-kube-api-access-5bwl7\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.266375 master-0 kubenswrapper[7614]: I0224 05:19:03.265238 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vdmz\" (UniqueName: \"kubernetes.io/projected/3f511d03-a182-4968-ba40-5c5c10e5e6be-kube-api-access-4vdmz\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.267914 master-0 kubenswrapper[7614]: I0224 05:19:03.267869 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.268239 master-0 kubenswrapper[7614]: I0224 05:19:03.267927 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kh6l\" (UniqueName: \"kubernetes.io/projected/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-kube-api-access-2kh6l\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.268239 master-0 kubenswrapper[7614]: I0224 05:19:03.268191 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-k8dtv\" (UniqueName: \"kubernetes.io/projected/b46907eb-36d6-4410-b7d8-8012b254c861-kube-api-access-k8dtv\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.272595 master-0 kubenswrapper[7614]: I0224 05:19:03.268379 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.272595 master-0 kubenswrapper[7614]: I0224 05:19:03.268498 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cczbm\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-kube-api-access-cczbm\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.272595 master-0 kubenswrapper[7614]: I0224 05:19:03.268768 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.272595 master-0 kubenswrapper[7614]: I0224 05:19:03.269793 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddfqw\" (UniqueName: \"kubernetes.io/projected/39623346-691b-42c8-af76-409d4f6629af-kube-api-access-ddfqw\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" 
(UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.272595 master-0 kubenswrapper[7614]: I0224 05:19:03.270031 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4qf\" (UniqueName: \"kubernetes.io/projected/e6a0fc47-b446-4902-9f8a-04870cbafcab-kube-api-access-kx4qf\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.272595 master-0 kubenswrapper[7614]: I0224 05:19:03.270368 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjzn\" (UniqueName: \"kubernetes.io/projected/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-kube-api-access-7vjzn\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:19:03.275174 master-0 kubenswrapper[7614]: I0224 05:19:03.275127 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-retry-1-master-0" event={"ID":"4df29682-0936-44a2-9629-2e90115671e0","Type":"ContainerStarted","Data":"9591bdc727c99f89e551f4c32dad8c2aa3f7be8a52343c558f1322701668f7df"} Feb 24 05:19:03.278274 master-0 kubenswrapper[7614]: I0224 05:19:03.277940 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:19:03.281104 master-0 kubenswrapper[7614]: I0224 05:19:03.281052 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" event={"ID":"8cafd431-e8f6-4b60-9214-3d01b1f43982","Type":"ContainerDied","Data":"24f82b37e68110a8b17b3abd244f394367fec11cfc6bdefbe95aaa0a0a273ff0"} Feb 24 05:19:03.281188 master-0 kubenswrapper[7614]: I0224 05:19:03.281127 7614 scope.go:117] "RemoveContainer" containerID="ee6c5d36068024e9bafe4482e75c474aa3bcf31e561b317ef75ae830061a9718" Feb 24 05:19:03.281579 master-0 kubenswrapper[7614]: I0224 05:19:03.281561 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz" Feb 24 05:19:03.299374 master-0 kubenswrapper[7614]: I0224 05:19:03.299333 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:19:03.309510 master-0 kubenswrapper[7614]: I0224 05:19:03.309470 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:19:03.328334 master-0 kubenswrapper[7614]: I0224 05:19:03.328262 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:03.332919 master-0 kubenswrapper[7614]: I0224 05:19:03.331762 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.333098 master-0 kubenswrapper[7614]: I0224 05:19:03.332866 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.333394 master-0 kubenswrapper[7614]: I0224 05:19:03.333369 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2fkp\" (UniqueName: \"kubernetes.io/projected/39c4d0aa-c372-4d02-9302-337e68b56784-kube-api-access-b2fkp\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.334293 master-0 kubenswrapper[7614]: I0224 05:19:03.334248 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dwz2\" (UniqueName: \"kubernetes.io/projected/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-kube-api-access-5dwz2\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.334639 master-0 kubenswrapper[7614]: I0224 05:19:03.334618 7614 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.335238 master-0 kubenswrapper[7614]: I0224 05:19:03.335220 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.336100 master-0 kubenswrapper[7614]: I0224 05:19:03.336082 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.336817 master-0 kubenswrapper[7614]: I0224 05:19:03.336794 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.337071 master-0 kubenswrapper[7614]: I0224 05:19:03.335630 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " 
pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.337071 master-0 kubenswrapper[7614]: I0224 05:19:03.335955 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.337557 master-0 kubenswrapper[7614]: I0224 05:19:03.337510 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-snapshots\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.337664 master-0 kubenswrapper[7614]: I0224 05:19:03.337623 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.338583 master-0 kubenswrapper[7614]: I0224 05:19:03.338551 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-snapshots\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.338967 master-0 kubenswrapper[7614]: I0224 05:19:03.338946 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.340258 master-0 kubenswrapper[7614]: I0224 05:19:03.340229 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.341165 master-0 kubenswrapper[7614]: I0224 05:19:03.341065 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:03.341425 master-0 kubenswrapper[7614]: I0224 05:19:03.341152 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.360848 master-0 kubenswrapper[7614]: I0224 05:19:03.360761 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dwz2\" (UniqueName: \"kubernetes.io/projected/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-kube-api-access-5dwz2\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.365037 master-0 kubenswrapper[7614]: I0224 05:19:03.364569 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:19:03.372880 master-0 kubenswrapper[7614]: I0224 05:19:03.372766 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2fkp\" (UniqueName: \"kubernetes.io/projected/39c4d0aa-c372-4d02-9302-337e68b56784-kube-api-access-b2fkp\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.376525 master-0 kubenswrapper[7614]: I0224 05:19:03.375710 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-retry-1-master-0" podStartSLOduration=2.37569086 podStartE2EDuration="2.37569086s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:03.364462082 +0000 UTC m=+274.399205228" watchObservedRunningTime="2026-02-24 05:19:03.37569086 +0000 UTC m=+274.410434026" Feb 24 05:19:03.383526 master-0 kubenswrapper[7614]: I0224 05:19:03.383441 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz"] Feb 24 05:19:03.387643 master-0 kubenswrapper[7614]: I0224 05:19:03.387150 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:19:03.388405 master-0 kubenswrapper[7614]: I0224 05:19:03.388259 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85ff64b64d-965rz"] Feb 24 05:19:03.390906 master-0 kubenswrapper[7614]: W0224 05:19:03.390840 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6a0fc47_b446_4902_9f8a_04870cbafcab.slice/crio-e0d20c57fe745f0a7a074b91ba4c54bbdd4dc326b155cd4b8a578d9c21d5db21 WatchSource:0}: Error finding container e0d20c57fe745f0a7a074b91ba4c54bbdd4dc326b155cd4b8a578d9c21d5db21: Status 404 returned error can't find the container with id e0d20c57fe745f0a7a074b91ba4c54bbdd4dc326b155cd4b8a578d9c21d5db21 Feb 24 05:19:03.400987 master-0 kubenswrapper[7614]: I0224 05:19:03.400618 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:03.414187 master-0 kubenswrapper[7614]: I0224 05:19:03.414161 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:19:03.474514 master-0 kubenswrapper[7614]: I0224 05:19:03.451113 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:19:03.491376 master-0 kubenswrapper[7614]: I0224 05:19:03.486911 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:03.493159 master-0 kubenswrapper[7614]: I0224 05:19:03.492631 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:19:03.505214 master-0 kubenswrapper[7614]: I0224 05:19:03.505108 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:03.517694 master-0 kubenswrapper[7614]: I0224 05:19:03.517292 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:19:03.518139 master-0 kubenswrapper[7614]: I0224 05:19:03.517884 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:03.571460 master-0 kubenswrapper[7614]: I0224 05:19:03.571127 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:19:03.747162 master-0 kubenswrapper[7614]: I0224 05:19:03.745884 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"] Feb 24 05:19:03.861233 master-0 kubenswrapper[7614]: I0224 05:19:03.858953 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv"] Feb 24 05:19:03.891427 master-0 kubenswrapper[7614]: W0224 05:19:03.891376 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39623346_691b_42c8_af76_409d4f6629af.slice/crio-89dd38053c589bc34a06848b1d85945f7e695c76927a0e1433d3c5444dd1eb09 WatchSource:0}: Error finding container 89dd38053c589bc34a06848b1d85945f7e695c76927a0e1433d3c5444dd1eb09: Status 404 returned error can't find the container with id 89dd38053c589bc34a06848b1d85945f7e695c76927a0e1433d3c5444dd1eb09 Feb 24 05:19:03.954771 
master-0 kubenswrapper[7614]: I0224 05:19:03.954631 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"] Feb 24 05:19:03.955413 master-0 kubenswrapper[7614]: I0224 05:19:03.955301 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z"] Feb 24 05:19:03.960144 master-0 kubenswrapper[7614]: W0224 05:19:03.959288 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb426cb33_1624_45e6_b8c5_4e8d251f6339.slice/crio-937f03ad2559d182c0cdd1d2762487960e12dca202f4d10b53ec97e755cb0a40 WatchSource:0}: Error finding container 937f03ad2559d182c0cdd1d2762487960e12dca202f4d10b53ec97e755cb0a40: Status 404 returned error can't find the container with id 937f03ad2559d182c0cdd1d2762487960e12dca202f4d10b53ec97e755cb0a40 Feb 24 05:19:03.964896 master-0 kubenswrapper[7614]: W0224 05:19:03.962427 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d51ce58_55f6_45d5_9d5d_7b31ae42380a.slice/crio-a51d75323a923af00f3bd0e9f47fc2b98d3fa4f81d500b08ed1b5763acd5b079 WatchSource:0}: Error finding container a51d75323a923af00f3bd0e9f47fc2b98d3fa4f81d500b08ed1b5763acd5b079: Status 404 returned error can't find the container with id a51d75323a923af00f3bd0e9f47fc2b98d3fa4f81d500b08ed1b5763acd5b079 Feb 24 05:19:03.964896 master-0 kubenswrapper[7614]: I0224 05:19:03.964837 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5"] Feb 24 05:19:03.982283 master-0 kubenswrapper[7614]: I0224 05:19:03.982236 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5"] Feb 24 05:19:03.987988 master-0 kubenswrapper[7614]: W0224 05:19:03.987919 7614 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1f03d97_1a6a_41e4_9ed3_cd9b01c46400.slice/crio-2d6d12cb5b54a813b83ddffc4965018d471ee515affc2a1d0cb0aec4f5245797 WatchSource:0}: Error finding container 2d6d12cb5b54a813b83ddffc4965018d471ee515affc2a1d0cb0aec4f5245797: Status 404 returned error can't find the container with id 2d6d12cb5b54a813b83ddffc4965018d471ee515affc2a1d0cb0aec4f5245797 Feb 24 05:19:03.994911 master-0 kubenswrapper[7614]: W0224 05:19:03.994862 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod116e6b47_d435_49ca_abb5_088788daf16a.slice/crio-93dd263e4986822eec0c710075ac8eebc645d482f87f7ef8bb335adc841614f2 WatchSource:0}: Error finding container 93dd263e4986822eec0c710075ac8eebc645d482f87f7ef8bb335adc841614f2: Status 404 returned error can't find the container with id 93dd263e4986822eec0c710075ac8eebc645d482f87f7ef8bb335adc841614f2 Feb 24 05:19:04.143636 master-0 kubenswrapper[7614]: I0224 05:19:04.143472 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"] Feb 24 05:19:04.147091 master-0 kubenswrapper[7614]: I0224 05:19:04.146928 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl"] Feb 24 05:19:04.149405 master-0 kubenswrapper[7614]: I0224 05:19:04.149359 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"] Feb 24 05:19:04.164050 master-0 kubenswrapper[7614]: I0224 05:19:04.163620 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7657d7494-mmsz6"] Feb 24 05:19:04.165814 master-0 kubenswrapper[7614]: I0224 05:19:04.165788 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq"] Feb 24 05:19:04.205397 master-0 kubenswrapper[7614]: W0224 05:19:04.204928 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9666fc94_71e3_46af_8b45_26e3a085d076.slice/crio-906a4975f221a3093bffb39f286ed36f66979e79a259e327d3df353ea75730c0 WatchSource:0}: Error finding container 906a4975f221a3093bffb39f286ed36f66979e79a259e327d3df353ea75730c0: Status 404 returned error can't find the container with id 906a4975f221a3093bffb39f286ed36f66979e79a259e327d3df353ea75730c0 Feb 24 05:19:04.293380 master-0 kubenswrapper[7614]: I0224 05:19:04.293336 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" event={"ID":"5d51ce58-55f6-45d5-9d5d-7b31ae42380a","Type":"ContainerStarted","Data":"d8bed4f9ab20b823328705627959d55208daad3d2b2ea306bf5e481cb8fb82b7"} Feb 24 05:19:04.293380 master-0 kubenswrapper[7614]: I0224 05:19:04.293390 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" event={"ID":"5d51ce58-55f6-45d5-9d5d-7b31ae42380a","Type":"ContainerStarted","Data":"a51d75323a923af00f3bd0e9f47fc2b98d3fa4f81d500b08ed1b5763acd5b079"} Feb 24 05:19:04.297952 master-0 kubenswrapper[7614]: I0224 05:19:04.297893 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" event={"ID":"b426cb33-1624-45e6-b8c5-4e8d251f6339","Type":"ContainerStarted","Data":"adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff"} Feb 24 05:19:04.298046 master-0 kubenswrapper[7614]: I0224 05:19:04.297970 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" 
event={"ID":"b426cb33-1624-45e6-b8c5-4e8d251f6339","Type":"ContainerStarted","Data":"937f03ad2559d182c0cdd1d2762487960e12dca202f4d10b53ec97e755cb0a40"} Feb 24 05:19:04.298387 master-0 kubenswrapper[7614]: I0224 05:19:04.298365 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:04.300054 master-0 kubenswrapper[7614]: I0224 05:19:04.299994 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerStarted","Data":"410534ca0c42d1b797ab53ba5fbf6b12f5a1a2db22751f87c2aa91614045629d"} Feb 24 05:19:04.301819 master-0 kubenswrapper[7614]: I0224 05:19:04.301774 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" event={"ID":"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400","Type":"ContainerStarted","Data":"2d6d12cb5b54a813b83ddffc4965018d471ee515affc2a1d0cb0aec4f5245797"} Feb 24 05:19:04.304760 master-0 kubenswrapper[7614]: I0224 05:19:04.304695 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" event={"ID":"e6a0fc47-b446-4902-9f8a-04870cbafcab","Type":"ContainerStarted","Data":"ff86ebcc5c21c17d77b09c8668eacb2f60f3347c8c630b1700b81d719fb05f20"} Feb 24 05:19:04.304760 master-0 kubenswrapper[7614]: I0224 05:19:04.304757 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" event={"ID":"e6a0fc47-b446-4902-9f8a-04870cbafcab","Type":"ContainerStarted","Data":"c6c52d6ef70ea7f7f832372ec9bfee7e402b20c8d34402c4b150b7a3ac96b4a0"} Feb 24 05:19:04.304959 master-0 kubenswrapper[7614]: I0224 05:19:04.304769 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" event={"ID":"e6a0fc47-b446-4902-9f8a-04870cbafcab","Type":"ContainerStarted","Data":"e0d20c57fe745f0a7a074b91ba4c54bbdd4dc326b155cd4b8a578d9c21d5db21"} Feb 24 05:19:04.307323 master-0 kubenswrapper[7614]: I0224 05:19:04.307245 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" event={"ID":"51b6b038-7029-4e3e-af6d-b7f85ac532b0","Type":"ContainerStarted","Data":"b0d97a3313f34611823cdda1a11180f5f55eb172ec7bcc000e94b7424e41c15c"} Feb 24 05:19:04.308865 master-0 kubenswrapper[7614]: I0224 05:19:04.308836 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" event={"ID":"b46907eb-36d6-4410-b7d8-8012b254c861","Type":"ContainerStarted","Data":"ad1ff7aca1c01f4debb69fc2fbfa6d76df8fe23c1970fe0bd96d9b70b7c21b32"} Feb 24 05:19:04.308942 master-0 kubenswrapper[7614]: I0224 05:19:04.308869 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" event={"ID":"b46907eb-36d6-4410-b7d8-8012b254c861","Type":"ContainerStarted","Data":"8e403e85ba5e32d44b48160b30b4587230e7b0f26d90604af0e04232edc028bd"} Feb 24 05:19:04.311394 master-0 kubenswrapper[7614]: I0224 05:19:04.311250 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" event={"ID":"116e6b47-d435-49ca-abb5-088788daf16a","Type":"ContainerStarted","Data":"91474ed4f0f18a33ba98693b844c8cbc82cdecfad32a8a94a35feeae9b527cc8"} Feb 24 05:19:04.311394 master-0 kubenswrapper[7614]: I0224 05:19:04.311279 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" 
event={"ID":"116e6b47-d435-49ca-abb5-088788daf16a","Type":"ContainerStarted","Data":"93dd263e4986822eec0c710075ac8eebc645d482f87f7ef8bb335adc841614f2"} Feb 24 05:19:04.312291 master-0 kubenswrapper[7614]: I0224 05:19:04.312264 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" event={"ID":"23bdafdd-27c9-4461-be4a-3ea916ac3875","Type":"ContainerStarted","Data":"d3656437a9ce9676295b2eb9bd8bc3fb63776e655e923084238b22192495f791"} Feb 24 05:19:04.313734 master-0 kubenswrapper[7614]: I0224 05:19:04.313670 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerStarted","Data":"89dd38053c589bc34a06848b1d85945f7e695c76927a0e1433d3c5444dd1eb09"} Feb 24 05:19:04.316413 master-0 kubenswrapper[7614]: I0224 05:19:04.316382 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" event={"ID":"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4","Type":"ContainerStarted","Data":"3aa615a9d796b417e579505462fba818eb63c6e04f0fc9bcc949d228f425e015"} Feb 24 05:19:04.317330 master-0 kubenswrapper[7614]: I0224 05:19:04.317280 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" event={"ID":"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4","Type":"ContainerStarted","Data":"68f61c7a09ca20650d4a6ea4b0f5e362ed36ea985ba0db19d10925a21520b6ad"} Feb 24 05:19:04.324625 master-0 kubenswrapper[7614]: I0224 05:19:04.324582 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" event={"ID":"9666fc94-71e3-46af-8b45-26e3a085d076","Type":"ContainerStarted","Data":"906a4975f221a3093bffb39f286ed36f66979e79a259e327d3df353ea75730c0"} Feb 24 05:19:04.326296 master-0 kubenswrapper[7614]: I0224 
05:19:04.326220 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" podStartSLOduration=3.326171467 podStartE2EDuration="3.326171467s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:04.315101882 +0000 UTC m=+275.349845038" watchObservedRunningTime="2026-02-24 05:19:04.326171467 +0000 UTC m=+275.360914623" Feb 24 05:19:04.332824 master-0 kubenswrapper[7614]: I0224 05:19:04.332770 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" podStartSLOduration=2.332754441 podStartE2EDuration="2.332754441s" podCreationTimestamp="2026-02-24 05:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:04.331144249 +0000 UTC m=+275.365887425" watchObservedRunningTime="2026-02-24 05:19:04.332754441 +0000 UTC m=+275.367497597" Feb 24 05:19:04.413436 master-0 kubenswrapper[7614]: I0224 05:19:04.413282 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59b498fcfb-mprnx"] Feb 24 05:19:04.431621 master-0 kubenswrapper[7614]: I0224 05:19:04.431582 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2"] Feb 24 05:19:04.532969 master-0 kubenswrapper[7614]: I0224 05:19:04.532885 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md"] Feb 24 05:19:04.541134 master-0 kubenswrapper[7614]: W0224 05:19:04.541036 7614 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39c4d0aa_c372_4d02_9302_337e68b56784.slice/crio-cb022277db501e47c11144c7784ae45171d1fe684dae009de53aad7904c4eadc WatchSource:0}: Error finding container cb022277db501e47c11144c7784ae45171d1fe684dae009de53aad7904c4eadc: Status 404 returned error can't find the container with id cb022277db501e47c11144c7784ae45171d1fe684dae009de53aad7904c4eadc Feb 24 05:19:04.741760 master-0 kubenswrapper[7614]: I0224 05:19:04.737570 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:19:05.186546 master-0 kubenswrapper[7614]: I0224 05:19:05.186235 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cafd431-e8f6-4b60-9214-3d01b1f43982" path="/var/lib/kubelet/pods/8cafd431-e8f6-4b60-9214-3d01b1f43982/volumes" Feb 24 05:19:05.344287 master-0 kubenswrapper[7614]: I0224 05:19:05.344195 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-mprnx" event={"ID":"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5","Type":"ContainerStarted","Data":"8edfb6097f947373026f0b09e341e33fda8a35b32db2f2f2929d0f92ff74f282"} Feb 24 05:19:05.356872 master-0 kubenswrapper[7614]: I0224 05:19:05.356788 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" event={"ID":"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38","Type":"ContainerStarted","Data":"9b2d98c4f0e58ff7b071f2a8af044d37c22d62fae7d69e0aeb951e2f2d4347a6"} Feb 24 05:19:05.356872 master-0 kubenswrapper[7614]: I0224 05:19:05.356864 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" event={"ID":"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38","Type":"ContainerStarted","Data":"d243d9f4d6d9c16fd75ab0c5744222bf367eeb4a55dc3a56ad2f15b145aca434"} Feb 24 05:19:05.356872 master-0 
kubenswrapper[7614]: I0224 05:19:05.356890 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:05.362163 master-0 kubenswrapper[7614]: I0224 05:19:05.362132 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:19:05.362676 master-0 kubenswrapper[7614]: I0224 05:19:05.362620 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" event={"ID":"39c4d0aa-c372-4d02-9302-337e68b56784","Type":"ContainerStarted","Data":"57a65215dc885b11393168d1ddd3aa92cc9659c69613149b4ae80c58c5113c5b"} Feb 24 05:19:05.362676 master-0 kubenswrapper[7614]: I0224 05:19:05.362671 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" event={"ID":"39c4d0aa-c372-4d02-9302-337e68b56784","Type":"ContainerStarted","Data":"986b482003ff19c4b718ec972373fc705ec17bcf47510b88393859e89ab2007d"} Feb 24 05:19:05.362922 master-0 kubenswrapper[7614]: I0224 05:19:05.362685 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" event={"ID":"39c4d0aa-c372-4d02-9302-337e68b56784","Type":"ContainerStarted","Data":"cb022277db501e47c11144c7784ae45171d1fe684dae009de53aad7904c4eadc"} Feb 24 05:19:05.366204 master-0 kubenswrapper[7614]: I0224 05:19:05.366157 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" event={"ID":"9666fc94-71e3-46af-8b45-26e3a085d076","Type":"ContainerStarted","Data":"c7e730dd3a7d0bf79db7c97546cc1a774de2e51bf08a7ed7e4659615414dc4f1"} Feb 24 05:19:05.366635 master-0 kubenswrapper[7614]: I0224 05:19:05.366601 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:05.369995 master-0 kubenswrapper[7614]: I0224 05:19:05.369957 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" event={"ID":"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4","Type":"ContainerStarted","Data":"d54fd19b9eb4386cf27b0171bbd26afecfaf6c5721e1c1b2aba9af1126e48295"} Feb 24 05:19:05.370701 master-0 kubenswrapper[7614]: I0224 05:19:05.370675 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:19:05.405464 master-0 kubenswrapper[7614]: I0224 05:19:05.405384 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" podStartSLOduration=4.405363286 podStartE2EDuration="4.405363286s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:05.380847533 +0000 UTC m=+276.415590689" watchObservedRunningTime="2026-02-24 05:19:05.405363286 +0000 UTC m=+276.440106442" Feb 24 05:19:05.407569 master-0 kubenswrapper[7614]: I0224 05:19:05.407464 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" podStartSLOduration=4.407457122 podStartE2EDuration="4.407457122s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:05.403001843 +0000 UTC m=+276.437745009" watchObservedRunningTime="2026-02-24 05:19:05.407457122 +0000 UTC m=+276.442200278" Feb 24 05:19:05.781336 master-0 kubenswrapper[7614]: I0224 05:19:05.781236 7614 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" podStartSLOduration=4.781204649 podStartE2EDuration="4.781204649s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:05.781168438 +0000 UTC m=+276.815911594" watchObservedRunningTime="2026-02-24 05:19:05.781204649 +0000 UTC m=+276.815947805" Feb 24 05:19:05.850567 master-0 kubenswrapper[7614]: I0224 05:19:05.850476 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" podStartSLOduration=4.850447802 podStartE2EDuration="4.850447802s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:05.843888348 +0000 UTC m=+276.878631504" watchObservedRunningTime="2026-02-24 05:19:05.850447802 +0000 UTC m=+276.885190958" Feb 24 05:19:05.948739 master-0 kubenswrapper[7614]: I0224 05:19:05.941754 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gn8m8"] Feb 24 05:19:05.948739 master-0 kubenswrapper[7614]: I0224 05:19:05.943245 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:05.948739 master-0 kubenswrapper[7614]: I0224 05:19:05.943818 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gn8m8"] Feb 24 05:19:05.949137 master-0 kubenswrapper[7614]: I0224 05:19:05.949084 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-qvwf6" Feb 24 05:19:06.073885 master-0 kubenswrapper[7614]: I0224 05:19:06.073718 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-utilities\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.073885 master-0 kubenswrapper[7614]: I0224 05:19:06.073800 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl6rx\" (UniqueName: \"kubernetes.io/projected/2c6bb439-ed17-4761-b193-580be5f6aa00-kube-api-access-pl6rx\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.073885 master-0 kubenswrapper[7614]: I0224 05:19:06.073846 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-catalog-content\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.111698 master-0 kubenswrapper[7614]: I0224 05:19:06.111230 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-68vwc"] Feb 24 05:19:06.112342 master-0 kubenswrapper[7614]: I0224 
05:19:06.112276 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.115834 master-0 kubenswrapper[7614]: I0224 05:19:06.115465 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-srdvz" Feb 24 05:19:06.129064 master-0 kubenswrapper[7614]: I0224 05:19:06.128806 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-68vwc"] Feb 24 05:19:06.175786 master-0 kubenswrapper[7614]: I0224 05:19:06.175642 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-utilities\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.175786 master-0 kubenswrapper[7614]: I0224 05:19:06.175752 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl6rx\" (UniqueName: \"kubernetes.io/projected/2c6bb439-ed17-4761-b193-580be5f6aa00-kube-api-access-pl6rx\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.175786 master-0 kubenswrapper[7614]: I0224 05:19:06.175856 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-catalog-content\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.176673 master-0 kubenswrapper[7614]: I0224 05:19:06.176372 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-utilities\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.177695 master-0 kubenswrapper[7614]: I0224 05:19:06.177655 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-catalog-content\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.234258 master-0 kubenswrapper[7614]: I0224 05:19:06.234193 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl6rx\" (UniqueName: \"kubernetes.io/projected/2c6bb439-ed17-4761-b193-580be5f6aa00-kube-api-access-pl6rx\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.276670 master-0 kubenswrapper[7614]: I0224 05:19:06.276606 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qg5\" (UniqueName: \"kubernetes.io/projected/cd674e58-b749-46fb-8a28-66012fd8b401-kube-api-access-67qg5\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.276942 master-0 kubenswrapper[7614]: I0224 05:19:06.276811 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-utilities\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.276942 master-0 kubenswrapper[7614]: I0224 05:19:06.276853 7614 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-catalog-content\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.277434 master-0 kubenswrapper[7614]: I0224 05:19:06.277331 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:06.380103 master-0 kubenswrapper[7614]: I0224 05:19:06.379655 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qg5\" (UniqueName: \"kubernetes.io/projected/cd674e58-b749-46fb-8a28-66012fd8b401-kube-api-access-67qg5\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.380103 master-0 kubenswrapper[7614]: I0224 05:19:06.379825 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-utilities\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.380103 master-0 kubenswrapper[7614]: I0224 05:19:06.379892 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-catalog-content\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.381909 master-0 kubenswrapper[7614]: I0224 05:19:06.381494 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-catalog-content\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.382892 master-0 kubenswrapper[7614]: I0224 05:19:06.382806 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-utilities\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.382987 master-0 kubenswrapper[7614]: I0224 05:19:06.382942 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:06.388762 master-0 kubenswrapper[7614]: I0224 05:19:06.388542 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:19:06.398440 master-0 kubenswrapper[7614]: I0224 05:19:06.398393 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qg5\" (UniqueName: \"kubernetes.io/projected/cd674e58-b749-46fb-8a28-66012fd8b401-kube-api-access-67qg5\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.436416 master-0 kubenswrapper[7614]: I0224 05:19:06.435585 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:06.611455 master-0 kubenswrapper[7614]: I0224 05:19:06.611382 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs"] Feb 24 05:19:06.612261 master-0 kubenswrapper[7614]: I0224 05:19:06.612239 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.619353 master-0 kubenswrapper[7614]: I0224 05:19:06.618700 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 05:19:06.632302 master-0 kubenswrapper[7614]: I0224 05:19:06.632174 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs"] Feb 24 05:19:06.787985 master-0 kubenswrapper[7614]: I0224 05:19:06.787918 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zxwj\" (UniqueName: \"kubernetes.io/projected/49b426a3-f16e-40e9-a166-7270d4cfcc60-kube-api-access-9zxwj\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.788300 master-0 kubenswrapper[7614]: I0224 05:19:06.788062 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/49b426a3-f16e-40e9-a166-7270d4cfcc60-tmpfs\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.788300 master-0 kubenswrapper[7614]: I0224 05:19:06.788096 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.788300 master-0 kubenswrapper[7614]: I0224 05:19:06.788112 7614 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.889257 master-0 kubenswrapper[7614]: I0224 05:19:06.889129 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/49b426a3-f16e-40e9-a166-7270d4cfcc60-tmpfs\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.889257 master-0 kubenswrapper[7614]: I0224 05:19:06.889190 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.889540 master-0 kubenswrapper[7614]: I0224 05:19:06.889438 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.889712 master-0 kubenswrapper[7614]: I0224 05:19:06.889578 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zxwj\" (UniqueName: \"kubernetes.io/projected/49b426a3-f16e-40e9-a166-7270d4cfcc60-kube-api-access-9zxwj\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " 
pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.889949 master-0 kubenswrapper[7614]: I0224 05:19:06.889891 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/49b426a3-f16e-40e9-a166-7270d4cfcc60-tmpfs\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.893617 master-0 kubenswrapper[7614]: I0224 05:19:06.893585 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.899333 master-0 kubenswrapper[7614]: I0224 05:19:06.899260 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:06.937264 master-0 kubenswrapper[7614]: I0224 05:19:06.937192 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zxwj\" (UniqueName: \"kubernetes.io/projected/49b426a3-f16e-40e9-a166-7270d4cfcc60-kube-api-access-9zxwj\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:07.232242 master-0 kubenswrapper[7614]: I0224 05:19:07.232179 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:07.711723 master-0 kubenswrapper[7614]: I0224 05:19:07.711513 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v64s6"] Feb 24 05:19:07.712828 master-0 kubenswrapper[7614]: I0224 05:19:07.712785 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.714575 master-0 kubenswrapper[7614]: I0224 05:19:07.714540 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-46rst" Feb 24 05:19:07.727334 master-0 kubenswrapper[7614]: I0224 05:19:07.727254 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v64s6"] Feb 24 05:19:07.810581 master-0 kubenswrapper[7614]: I0224 05:19:07.810514 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jflg\" (UniqueName: \"kubernetes.io/projected/75b4304c-09f2-499e-8c2f-da603e43ba72-kube-api-access-7jflg\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.810581 master-0 kubenswrapper[7614]: I0224 05:19:07.810574 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-utilities\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.810930 master-0 kubenswrapper[7614]: I0224 05:19:07.810685 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-catalog-content\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.912610 master-0 kubenswrapper[7614]: I0224 05:19:07.912503 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jflg\" (UniqueName: \"kubernetes.io/projected/75b4304c-09f2-499e-8c2f-da603e43ba72-kube-api-access-7jflg\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.912610 master-0 kubenswrapper[7614]: I0224 05:19:07.912589 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-utilities\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.913132 master-0 kubenswrapper[7614]: I0224 05:19:07.912641 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-catalog-content\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.913393 master-0 kubenswrapper[7614]: I0224 05:19:07.913348 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-utilities\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.913632 master-0 kubenswrapper[7614]: I0224 05:19:07.913572 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-catalog-content\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:07.933507 master-0 kubenswrapper[7614]: I0224 05:19:07.933423 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jflg\" (UniqueName: \"kubernetes.io/projected/75b4304c-09f2-499e-8c2f-da603e43ba72-kube-api-access-7jflg\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:08.039040 master-0 kubenswrapper[7614]: I0224 05:19:08.038952 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:08.410875 master-0 kubenswrapper[7614]: I0224 05:19:08.410669 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-c56dz"] Feb 24 05:19:08.412274 master-0 kubenswrapper[7614]: I0224 05:19:08.412220 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.415532 master-0 kubenswrapper[7614]: I0224 05:19:08.414826 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 05:19:08.415532 master-0 kubenswrapper[7614]: I0224 05:19:08.414937 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-sdvhz" Feb 24 05:19:08.518973 master-0 kubenswrapper[7614]: I0224 05:19:08.518909 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.518973 master-0 kubenswrapper[7614]: I0224 05:19:08.518994 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5tgk\" (UniqueName: \"kubernetes.io/projected/a3561f49-0808-4d96-95ec-456fcb5c5bb4-kube-api-access-r5tgk\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.519430 master-0 kubenswrapper[7614]: I0224 05:19:08.519030 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a3561f49-0808-4d96-95ec-456fcb5c5bb4-rootfs\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.519430 master-0 kubenswrapper[7614]: I0224 05:19:08.519185 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.621723 master-0 kubenswrapper[7614]: I0224 05:19:08.621639 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.621723 master-0 kubenswrapper[7614]: I0224 05:19:08.621736 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.622034 master-0 kubenswrapper[7614]: I0224 05:19:08.621800 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5tgk\" (UniqueName: \"kubernetes.io/projected/a3561f49-0808-4d96-95ec-456fcb5c5bb4-kube-api-access-r5tgk\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.622034 master-0 kubenswrapper[7614]: I0224 05:19:08.621853 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a3561f49-0808-4d96-95ec-456fcb5c5bb4-rootfs\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.622900 
master-0 kubenswrapper[7614]: I0224 05:19:08.622847 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a3561f49-0808-4d96-95ec-456fcb5c5bb4-rootfs\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.624588 master-0 kubenswrapper[7614]: I0224 05:19:08.624527 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.633613 master-0 kubenswrapper[7614]: I0224 05:19:08.633548 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.641990 master-0 kubenswrapper[7614]: I0224 05:19:08.641937 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5tgk\" (UniqueName: \"kubernetes.io/projected/a3561f49-0808-4d96-95ec-456fcb5c5bb4-kube-api-access-r5tgk\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.714601 master-0 kubenswrapper[7614]: I0224 05:19:08.714435 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xm8sw"] Feb 24 05:19:08.715712 master-0 kubenswrapper[7614]: I0224 05:19:08.715665 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.718182 master-0 kubenswrapper[7614]: I0224 05:19:08.718145 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zm289" Feb 24 05:19:08.731943 master-0 kubenswrapper[7614]: I0224 05:19:08.731884 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xm8sw"] Feb 24 05:19:08.755942 master-0 kubenswrapper[7614]: I0224 05:19:08.755882 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:19:08.824175 master-0 kubenswrapper[7614]: I0224 05:19:08.824103 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8z6s\" (UniqueName: \"kubernetes.io/projected/8f3825c1-975c-40b5-a6ad-0f200968b3cd-kube-api-access-l8z6s\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.824175 master-0 kubenswrapper[7614]: I0224 05:19:08.824172 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-catalog-content\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.824508 master-0 kubenswrapper[7614]: I0224 05:19:08.824262 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-utilities\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.926961 master-0 kubenswrapper[7614]: 
I0224 05:19:08.926871 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-utilities\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.927591 master-0 kubenswrapper[7614]: I0224 05:19:08.927517 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8z6s\" (UniqueName: \"kubernetes.io/projected/8f3825c1-975c-40b5-a6ad-0f200968b3cd-kube-api-access-l8z6s\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.927672 master-0 kubenswrapper[7614]: I0224 05:19:08.927635 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-catalog-content\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.928626 master-0 kubenswrapper[7614]: I0224 05:19:08.928584 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-utilities\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.929248 master-0 kubenswrapper[7614]: I0224 05:19:08.928518 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-catalog-content\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:08.950491 master-0 kubenswrapper[7614]: I0224 
05:19:08.950447 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8z6s\" (UniqueName: \"kubernetes.io/projected/8f3825c1-975c-40b5-a6ad-0f200968b3cd-kube-api-access-l8z6s\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:09.037908 master-0 kubenswrapper[7614]: I0224 05:19:09.037836 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:19:17.293341 master-0 kubenswrapper[7614]: I0224 05:19:17.293231 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"] Feb 24 05:19:19.027061 master-0 kubenswrapper[7614]: W0224 05:19:19.026792 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3561f49_0808_4d96_95ec_456fcb5c5bb4.slice/crio-8f82575ddbb5dc664a876d323c277ef91af413f2e9ed224a0250e918dc81ae61 WatchSource:0}: Error finding container 8f82575ddbb5dc664a876d323c277ef91af413f2e9ed224a0250e918dc81ae61: Status 404 returned error can't find the container with id 8f82575ddbb5dc664a876d323c277ef91af413f2e9ed224a0250e918dc81ae61 Feb 24 05:19:19.509368 master-0 kubenswrapper[7614]: I0224 05:19:19.507302 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gn8m8"] Feb 24 05:19:19.619328 master-0 kubenswrapper[7614]: I0224 05:19:19.616464 7614 generic.go:334] "Generic (PLEG): container finished" podID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerID="9d5d2fd92f71a6c0810699352fbe58ce30a0fa6af46df79a0db731109cbec1eb" exitCode=0 Feb 24 05:19:19.619328 master-0 kubenswrapper[7614]: I0224 05:19:19.617513 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" 
event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerDied","Data":"9d5d2fd92f71a6c0810699352fbe58ce30a0fa6af46df79a0db731109cbec1eb"} Feb 24 05:19:19.650377 master-0 kubenswrapper[7614]: I0224 05:19:19.647584 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-68vwc"] Feb 24 05:19:19.660921 master-0 kubenswrapper[7614]: I0224 05:19:19.660592 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" event={"ID":"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4","Type":"ContainerStarted","Data":"050953d370eb17949c69b6e06def82479256e18c1b5c79a676b93c90f8560202"} Feb 24 05:19:19.688512 master-0 kubenswrapper[7614]: I0224 05:19:19.688452 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" event={"ID":"5d51ce58-55f6-45d5-9d5d-7b31ae42380a","Type":"ContainerStarted","Data":"bb3a0e8898f8ea9060490a27cc51b9a9e7a34486fe6313b2342ac6b15f983128"} Feb 24 05:19:19.692408 master-0 kubenswrapper[7614]: I0224 05:19:19.692332 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs"] Feb 24 05:19:19.694364 master-0 kubenswrapper[7614]: I0224 05:19:19.694296 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v64s6"] Feb 24 05:19:19.696800 master-0 kubenswrapper[7614]: I0224 05:19:19.696749 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59b498fcfb-mprnx" event={"ID":"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5","Type":"ContainerStarted","Data":"735cacb8903687f4959bf45209496824db9b626d66fde71cf866813f594377bb"} Feb 24 05:19:19.703200 master-0 kubenswrapper[7614]: I0224 05:19:19.703149 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c56dz" 
event={"ID":"a3561f49-0808-4d96-95ec-456fcb5c5bb4","Type":"ContainerStarted","Data":"8f82575ddbb5dc664a876d323c277ef91af413f2e9ed224a0250e918dc81ae61"} Feb 24 05:19:19.708340 master-0 kubenswrapper[7614]: I0224 05:19:19.708275 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" event={"ID":"23bdafdd-27c9-4461-be4a-3ea916ac3875","Type":"ContainerStarted","Data":"e316013fb83fe451b12a337302e18c3ea427b3968c1f30f37e4c5892013d663c"} Feb 24 05:19:19.711355 master-0 kubenswrapper[7614]: I0224 05:19:19.711217 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerStarted","Data":"d4516cc83e87e18d7c8ea61312f0b1b6185fcfcd2b620f9f1b31d56f65e19d0a"} Feb 24 05:19:19.713284 master-0 kubenswrapper[7614]: I0224 05:19:19.712853 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xm8sw"] Feb 24 05:19:19.747278 master-0 kubenswrapper[7614]: W0224 05:19:19.746760 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f3825c1_975c_40b5_a6ad_0f200968b3cd.slice/crio-b5eb5695ccec6b92144f40353b32b80192cdcb4ed71afa4329c2fd87d4604e30 WatchSource:0}: Error finding container b5eb5695ccec6b92144f40353b32b80192cdcb4ed71afa4329c2fd87d4604e30: Status 404 returned error can't find the container with id b5eb5695ccec6b92144f40353b32b80192cdcb4ed71afa4329c2fd87d4604e30 Feb 24 05:19:19.776923 master-0 kubenswrapper[7614]: I0224 05:19:19.776847 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" podStartSLOduration=3.998290335 podStartE2EDuration="18.776820547s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:04.208042645 +0000 
UTC m=+275.242785801" lastFinishedPulling="2026-02-24 05:19:18.986572857 +0000 UTC m=+290.021316013" observedRunningTime="2026-02-24 05:19:19.721091655 +0000 UTC m=+290.755834821" watchObservedRunningTime="2026-02-24 05:19:19.776820547 +0000 UTC m=+290.811563703" Feb 24 05:19:19.777068 master-0 kubenswrapper[7614]: I0224 05:19:19.776973 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" podStartSLOduration=3.9656717329999998 podStartE2EDuration="18.776968601s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:04.17098285 +0000 UTC m=+275.205726006" lastFinishedPulling="2026-02-24 05:19:18.982279728 +0000 UTC m=+290.017022874" observedRunningTime="2026-02-24 05:19:19.774488457 +0000 UTC m=+290.809231623" watchObservedRunningTime="2026-02-24 05:19:19.776968601 +0000 UTC m=+290.811711757" Feb 24 05:19:19.806715 master-0 kubenswrapper[7614]: I0224 05:19:19.801414 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-59b498fcfb-mprnx" podStartSLOduration=4.221151646 podStartE2EDuration="18.801385798s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:04.447609146 +0000 UTC m=+275.482352292" lastFinishedPulling="2026-02-24 05:19:19.027843248 +0000 UTC m=+290.062586444" observedRunningTime="2026-02-24 05:19:19.800462005 +0000 UTC m=+290.835205161" watchObservedRunningTime="2026-02-24 05:19:19.801385798 +0000 UTC m=+290.836128954" Feb 24 05:19:20.723810 master-0 kubenswrapper[7614]: I0224 05:19:20.723742 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerStarted","Data":"b84fcd05623ce330ed76a989e0aa6afb0c33a3acbcf41b0e7786c46338662d84"} Feb 24 05:19:20.727450 master-0 kubenswrapper[7614]: 
I0224 05:19:20.727373 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" event={"ID":"51b6b038-7029-4e3e-af6d-b7f85ac532b0","Type":"ContainerStarted","Data":"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd"} Feb 24 05:19:20.727527 master-0 kubenswrapper[7614]: I0224 05:19:20.727460 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" event={"ID":"51b6b038-7029-4e3e-af6d-b7f85ac532b0","Type":"ContainerStarted","Data":"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55"} Feb 24 05:19:20.727527 master-0 kubenswrapper[7614]: I0224 05:19:20.727462 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="cluster-cloud-controller-manager" containerID="cri-o://23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4" gracePeriod=30 Feb 24 05:19:20.727596 master-0 kubenswrapper[7614]: I0224 05:19:20.727541 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="kube-rbac-proxy" containerID="cri-o://3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd" gracePeriod=30 Feb 24 05:19:20.727662 master-0 kubenswrapper[7614]: I0224 05:19:20.727597 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="config-sync-controllers" 
containerID="cri-o://12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55" gracePeriod=30 Feb 24 05:19:20.727714 master-0 kubenswrapper[7614]: I0224 05:19:20.727484 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" event={"ID":"51b6b038-7029-4e3e-af6d-b7f85ac532b0","Type":"ContainerStarted","Data":"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4"} Feb 24 05:19:20.730237 master-0 kubenswrapper[7614]: I0224 05:19:20.730193 7614 generic.go:334] "Generic (PLEG): container finished" podID="75b4304c-09f2-499e-8c2f-da603e43ba72" containerID="b33243ea493b8d799596bfb5b13489bdfd7fcd9e03b18f82f7534ca74a24e7e7" exitCode=0 Feb 24 05:19:20.730336 master-0 kubenswrapper[7614]: I0224 05:19:20.730293 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v64s6" event={"ID":"75b4304c-09f2-499e-8c2f-da603e43ba72","Type":"ContainerDied","Data":"b33243ea493b8d799596bfb5b13489bdfd7fcd9e03b18f82f7534ca74a24e7e7"} Feb 24 05:19:20.730383 master-0 kubenswrapper[7614]: I0224 05:19:20.730352 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v64s6" event={"ID":"75b4304c-09f2-499e-8c2f-da603e43ba72","Type":"ContainerStarted","Data":"0c671a703dbac86ce7b1c5dcbfbe1729e65e787dfd6afe8e60d163a277f3e763"} Feb 24 05:19:20.734577 master-0 kubenswrapper[7614]: I0224 05:19:20.734508 7614 generic.go:334] "Generic (PLEG): container finished" podID="2c6bb439-ed17-4761-b193-580be5f6aa00" containerID="1e0a6a04590c29af11ea3e9db28d3f49a4348c84904bd3e2b3e794e87f147724" exitCode=0 Feb 24 05:19:20.734685 master-0 kubenswrapper[7614]: I0224 05:19:20.734643 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8m8" 
event={"ID":"2c6bb439-ed17-4761-b193-580be5f6aa00","Type":"ContainerDied","Data":"1e0a6a04590c29af11ea3e9db28d3f49a4348c84904bd3e2b3e794e87f147724"} Feb 24 05:19:20.734730 master-0 kubenswrapper[7614]: I0224 05:19:20.734693 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8m8" event={"ID":"2c6bb439-ed17-4761-b193-580be5f6aa00","Type":"ContainerStarted","Data":"79723ddb5fac1ee4009ac879b87cc7a72172f4afc11c2c1be74ae202b150e818"} Feb 24 05:19:20.738163 master-0 kubenswrapper[7614]: I0224 05:19:20.738111 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" event={"ID":"49b426a3-f16e-40e9-a166-7270d4cfcc60","Type":"ContainerStarted","Data":"77606a05dceec6664cf836f0344018d0d1eae5a5e1e75d365390ed397062261f"} Feb 24 05:19:20.738242 master-0 kubenswrapper[7614]: I0224 05:19:20.738167 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" event={"ID":"49b426a3-f16e-40e9-a166-7270d4cfcc60","Type":"ContainerStarted","Data":"005aea3f18d4d280e39bcec0aace6a6b0719831dd54d5e5f2bb06b03a10a1e55"} Feb 24 05:19:20.739455 master-0 kubenswrapper[7614]: I0224 05:19:20.739238 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:20.745413 master-0 kubenswrapper[7614]: I0224 05:19:20.745343 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" podStartSLOduration=4.726199733 podStartE2EDuration="19.745326268s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:03.89614593 +0000 UTC m=+274.930889086" lastFinishedPulling="2026-02-24 05:19:18.915272465 +0000 UTC m=+289.950015621" observedRunningTime="2026-02-24 05:19:20.743758267 +0000 UTC m=+291.778501413" 
watchObservedRunningTime="2026-02-24 05:19:20.745326268 +0000 UTC m=+291.780069424" Feb 24 05:19:20.745984 master-0 kubenswrapper[7614]: I0224 05:19:20.745931 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:19:20.748159 master-0 kubenswrapper[7614]: I0224 05:19:20.748118 7614 generic.go:334] "Generic (PLEG): container finished" podID="8f3825c1-975c-40b5-a6ad-0f200968b3cd" containerID="99fcb3aa839cddf10ee1220b0b8dba6f4ce8ca2800ef080d6330776f6b0863c7" exitCode=0 Feb 24 05:19:20.748231 master-0 kubenswrapper[7614]: I0224 05:19:20.748195 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm8sw" event={"ID":"8f3825c1-975c-40b5-a6ad-0f200968b3cd","Type":"ContainerDied","Data":"99fcb3aa839cddf10ee1220b0b8dba6f4ce8ca2800ef080d6330776f6b0863c7"} Feb 24 05:19:20.748231 master-0 kubenswrapper[7614]: I0224 05:19:20.748224 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm8sw" event={"ID":"8f3825c1-975c-40b5-a6ad-0f200968b3cd","Type":"ContainerStarted","Data":"b5eb5695ccec6b92144f40353b32b80192cdcb4ed71afa4329c2fd87d4604e30"} Feb 24 05:19:20.754827 master-0 kubenswrapper[7614]: I0224 05:19:20.753854 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c56dz" event={"ID":"a3561f49-0808-4d96-95ec-456fcb5c5bb4","Type":"ContainerStarted","Data":"f28f2e6c1e5d132fc320885e549569b3d8fd2507e2a1890481853165fc870754"} Feb 24 05:19:20.754827 master-0 kubenswrapper[7614]: I0224 05:19:20.753913 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c56dz" event={"ID":"a3561f49-0808-4d96-95ec-456fcb5c5bb4","Type":"ContainerStarted","Data":"3704b4410c390b4e0f95f080c5414b10d97f80b5ba3118394a07582dd875838c"} Feb 24 05:19:20.756855 master-0 kubenswrapper[7614]: I0224 
05:19:20.756739 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" event={"ID":"b46907eb-36d6-4410-b7d8-8012b254c861","Type":"ContainerStarted","Data":"4240fe5fb8260c10732017fed99af3d30c24e8ab69f98fb89fb188b3077c32da"} Feb 24 05:19:20.761585 master-0 kubenswrapper[7614]: I0224 05:19:20.761123 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" event={"ID":"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400","Type":"ContainerStarted","Data":"a36fb847cfc8df5fc6c5185376329dd9ae5ab47df139ba0d792b1adb2ce6277f"} Feb 24 05:19:20.765139 master-0 kubenswrapper[7614]: I0224 05:19:20.765103 7614 generic.go:334] "Generic (PLEG): container finished" podID="cd674e58-b749-46fb-8a28-66012fd8b401" containerID="8b37d0025618263e47dfe8f40022b28e5392017192dbe6c7bc145156cde44d71" exitCode=0 Feb 24 05:19:20.765329 master-0 kubenswrapper[7614]: I0224 05:19:20.765281 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68vwc" event={"ID":"cd674e58-b749-46fb-8a28-66012fd8b401","Type":"ContainerDied","Data":"8b37d0025618263e47dfe8f40022b28e5392017192dbe6c7bc145156cde44d71"} Feb 24 05:19:20.768213 master-0 kubenswrapper[7614]: I0224 05:19:20.766457 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68vwc" event={"ID":"cd674e58-b749-46fb-8a28-66012fd8b401","Type":"ContainerStarted","Data":"f05f4c8572660fb60933e1a43cdf2d946cf6624f2ede2a6f783e25d928dd09bd"} Feb 24 05:19:20.782513 master-0 kubenswrapper[7614]: I0224 05:19:20.778343 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" event={"ID":"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4","Type":"ContainerStarted","Data":"3662deb6ce7c9516218ecd5a8d1a712476279c064d7e6c34b55f227ee7531977"} Feb 24 05:19:20.782869 
master-0 kubenswrapper[7614]: I0224 05:19:20.782771 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" event={"ID":"116e6b47-d435-49ca-abb5-088788daf16a","Type":"ContainerStarted","Data":"6b3c3ebf05dd2e018df6f39f4bdd076d24f312bc4472c6ee016795dfeeb9269e"} Feb 24 05:19:20.798838 master-0 kubenswrapper[7614]: I0224 05:19:20.795455 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" podStartSLOduration=14.795158019 podStartE2EDuration="14.795158019s" podCreationTimestamp="2026-02-24 05:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:20.794572874 +0000 UTC m=+291.829316060" watchObservedRunningTime="2026-02-24 05:19:20.795158019 +0000 UTC m=+291.829901175" Feb 24 05:19:20.798838 master-0 kubenswrapper[7614]: I0224 05:19:20.798055 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" podStartSLOduration=4.172023887 podStartE2EDuration="19.798048113s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:03.399086112 +0000 UTC m=+274.433829268" lastFinishedPulling="2026-02-24 05:19:19.025110298 +0000 UTC m=+290.059853494" observedRunningTime="2026-02-24 05:19:20.775884404 +0000 UTC m=+291.810627580" watchObservedRunningTime="2026-02-24 05:19:20.798048113 +0000 UTC m=+291.832791269" Feb 24 05:19:20.871907 master-0 kubenswrapper[7614]: I0224 05:19:20.871841 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-c56dz" podStartSLOduration=12.871818139 podStartE2EDuration="12.871818139s" podCreationTimestamp="2026-02-24 05:19:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:20.868898124 +0000 UTC m=+291.903641280" watchObservedRunningTime="2026-02-24 05:19:20.871818139 +0000 UTC m=+291.906561295" Feb 24 05:19:20.888281 master-0 kubenswrapper[7614]: I0224 05:19:20.888197 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:20.925417 master-0 kubenswrapper[7614]: I0224 05:19:20.924922 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" podStartSLOduration=4.933628103 podStartE2EDuration="19.924902203s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:03.994841856 +0000 UTC m=+275.029585012" lastFinishedPulling="2026-02-24 05:19:18.986115956 +0000 UTC m=+290.020859112" observedRunningTime="2026-02-24 05:19:20.9220522 +0000 UTC m=+291.956795356" watchObservedRunningTime="2026-02-24 05:19:20.924902203 +0000 UTC m=+291.959645359" Feb 24 05:19:20.961168 master-0 kubenswrapper[7614]: I0224 05:19:20.960979 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" podStartSLOduration=4.778878689 podStartE2EDuration="19.96095736s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:03.977641527 +0000 UTC m=+275.012384683" lastFinishedPulling="2026-02-24 05:19:19.159720198 +0000 UTC m=+290.194463354" observedRunningTime="2026-02-24 05:19:20.95669757 +0000 UTC m=+291.991440746" watchObservedRunningTime="2026-02-24 05:19:20.96095736 +0000 UTC m=+291.995700516" Feb 24 05:19:21.026899 master-0 kubenswrapper[7614]: I0224 05:19:21.026860 7614 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/51b6b038-7029-4e3e-af6d-b7f85ac532b0-cloud-controller-manager-operator-tls\") pod \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " Feb 24 05:19:21.027262 master-0 kubenswrapper[7614]: I0224 05:19:21.026979 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-images\") pod \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " Feb 24 05:19:21.027262 master-0 kubenswrapper[7614]: I0224 05:19:21.027028 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-auth-proxy-config\") pod \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " Feb 24 05:19:21.027262 master-0 kubenswrapper[7614]: I0224 05:19:21.027057 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg9bg\" (UniqueName: \"kubernetes.io/projected/51b6b038-7029-4e3e-af6d-b7f85ac532b0-kube-api-access-zg9bg\") pod \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " Feb 24 05:19:21.027262 master-0 kubenswrapper[7614]: I0224 05:19:21.027079 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/51b6b038-7029-4e3e-af6d-b7f85ac532b0-host-etc-kube\") pod \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\" (UID: \"51b6b038-7029-4e3e-af6d-b7f85ac532b0\") " Feb 24 05:19:21.027262 master-0 kubenswrapper[7614]: I0224 05:19:21.027240 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51b6b038-7029-4e3e-af6d-b7f85ac532b0-host-etc-kube" 
(OuterVolumeSpecName: "host-etc-kube") pod "51b6b038-7029-4e3e-af6d-b7f85ac532b0" (UID: "51b6b038-7029-4e3e-af6d-b7f85ac532b0"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:19:21.028155 master-0 kubenswrapper[7614]: I0224 05:19:21.028076 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" podStartSLOduration=5.119801788 podStartE2EDuration="20.028048964s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:04.208531618 +0000 UTC m=+275.243274774" lastFinishedPulling="2026-02-24 05:19:19.116778784 +0000 UTC m=+290.151521950" observedRunningTime="2026-02-24 05:19:21.0212588 +0000 UTC m=+292.056001966" watchObservedRunningTime="2026-02-24 05:19:21.028048964 +0000 UTC m=+292.062792120" Feb 24 05:19:21.029081 master-0 kubenswrapper[7614]: I0224 05:19:21.029045 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "51b6b038-7029-4e3e-af6d-b7f85ac532b0" (UID: "51b6b038-7029-4e3e-af6d-b7f85ac532b0"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:19:21.029162 master-0 kubenswrapper[7614]: I0224 05:19:21.029104 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-images" (OuterVolumeSpecName: "images") pod "51b6b038-7029-4e3e-af6d-b7f85ac532b0" (UID: "51b6b038-7029-4e3e-af6d-b7f85ac532b0"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:19:21.054643 master-0 kubenswrapper[7614]: I0224 05:19:21.054503 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51b6b038-7029-4e3e-af6d-b7f85ac532b0-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "51b6b038-7029-4e3e-af6d-b7f85ac532b0" (UID: "51b6b038-7029-4e3e-af6d-b7f85ac532b0"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:19:21.067643 master-0 kubenswrapper[7614]: I0224 05:19:21.067562 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b6b038-7029-4e3e-af6d-b7f85ac532b0-kube-api-access-zg9bg" (OuterVolumeSpecName: "kube-api-access-zg9bg") pod "51b6b038-7029-4e3e-af6d-b7f85ac532b0" (UID: "51b6b038-7029-4e3e-af6d-b7f85ac532b0"). InnerVolumeSpecName "kube-api-access-zg9bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:19:21.118239 master-0 kubenswrapper[7614]: I0224 05:19:21.118050 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" podStartSLOduration=5.407503831 podStartE2EDuration="20.118019186s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:04.26348957 +0000 UTC m=+275.298232726" lastFinishedPulling="2026-02-24 05:19:18.974004925 +0000 UTC m=+290.008748081" observedRunningTime="2026-02-24 05:19:21.085926522 +0000 UTC m=+292.120669678" watchObservedRunningTime="2026-02-24 05:19:21.118019186 +0000 UTC m=+292.152762352" Feb 24 05:19:21.131211 master-0 kubenswrapper[7614]: I0224 05:19:21.131157 7614 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-images\") on node \"master-0\" DevicePath \"\"" Feb 24 
05:19:21.131440 master-0 kubenswrapper[7614]: I0224 05:19:21.131213 7614 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/51b6b038-7029-4e3e-af6d-b7f85ac532b0-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:21.131440 master-0 kubenswrapper[7614]: I0224 05:19:21.131234 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg9bg\" (UniqueName: \"kubernetes.io/projected/51b6b038-7029-4e3e-af6d-b7f85ac532b0-kube-api-access-zg9bg\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:21.131440 master-0 kubenswrapper[7614]: I0224 05:19:21.131247 7614 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/51b6b038-7029-4e3e-af6d-b7f85ac532b0-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:21.131440 master-0 kubenswrapper[7614]: I0224 05:19:21.131259 7614 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/51b6b038-7029-4e3e-af6d-b7f85ac532b0-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:21.801718 master-0 kubenswrapper[7614]: I0224 05:19:21.801644 7614 generic.go:334] "Generic (PLEG): container finished" podID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerID="3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd" exitCode=0 Feb 24 05:19:21.801718 master-0 kubenswrapper[7614]: I0224 05:19:21.801690 7614 generic.go:334] "Generic (PLEG): container finished" podID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerID="12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55" exitCode=0 Feb 24 05:19:21.801718 master-0 kubenswrapper[7614]: I0224 05:19:21.801698 7614 generic.go:334] "Generic (PLEG): container finished" podID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerID="23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4" exitCode=0 
Feb 24 05:19:21.802378 master-0 kubenswrapper[7614]: I0224 05:19:21.801752 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" event={"ID":"51b6b038-7029-4e3e-af6d-b7f85ac532b0","Type":"ContainerDied","Data":"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd"} Feb 24 05:19:21.802378 master-0 kubenswrapper[7614]: I0224 05:19:21.801833 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" event={"ID":"51b6b038-7029-4e3e-af6d-b7f85ac532b0","Type":"ContainerDied","Data":"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55"} Feb 24 05:19:21.802378 master-0 kubenswrapper[7614]: I0224 05:19:21.801852 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" event={"ID":"51b6b038-7029-4e3e-af6d-b7f85ac532b0","Type":"ContainerDied","Data":"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4"} Feb 24 05:19:21.802378 master-0 kubenswrapper[7614]: I0224 05:19:21.801866 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" event={"ID":"51b6b038-7029-4e3e-af6d-b7f85ac532b0","Type":"ContainerDied","Data":"b0d97a3313f34611823cdda1a11180f5f55eb172ec7bcc000e94b7424e41c15c"} Feb 24 05:19:21.802378 master-0 kubenswrapper[7614]: I0224 05:19:21.801890 7614 scope.go:117] "RemoveContainer" containerID="3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd" Feb 24 05:19:21.802378 master-0 kubenswrapper[7614]: I0224 05:19:21.802251 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq" Feb 24 05:19:21.822163 master-0 kubenswrapper[7614]: I0224 05:19:21.820341 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"] Feb 24 05:19:21.829907 master-0 kubenswrapper[7614]: I0224 05:19:21.829860 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-cbd75ff8d-jzkmq"] Feb 24 05:19:21.838526 master-0 kubenswrapper[7614]: I0224 05:19:21.838484 7614 scope.go:117] "RemoveContainer" containerID="12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55" Feb 24 05:19:21.870339 master-0 kubenswrapper[7614]: I0224 05:19:21.870157 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t"] Feb 24 05:19:21.870977 master-0 kubenswrapper[7614]: E0224 05:19:21.870929 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="cluster-cloud-controller-manager" Feb 24 05:19:21.870977 master-0 kubenswrapper[7614]: I0224 05:19:21.870970 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="cluster-cloud-controller-manager" Feb 24 05:19:21.871073 master-0 kubenswrapper[7614]: E0224 05:19:21.870991 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="config-sync-controllers" Feb 24 05:19:21.871073 master-0 kubenswrapper[7614]: I0224 05:19:21.870998 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="config-sync-controllers" Feb 24 05:19:21.871073 master-0 kubenswrapper[7614]: E0224 
05:19:21.871006 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="kube-rbac-proxy" Feb 24 05:19:21.871073 master-0 kubenswrapper[7614]: I0224 05:19:21.871013 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="kube-rbac-proxy" Feb 24 05:19:21.874506 master-0 kubenswrapper[7614]: I0224 05:19:21.871360 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="config-sync-controllers" Feb 24 05:19:21.874506 master-0 kubenswrapper[7614]: I0224 05:19:21.871376 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="cluster-cloud-controller-manager" Feb 24 05:19:21.874506 master-0 kubenswrapper[7614]: I0224 05:19:21.871389 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" containerName="kube-rbac-proxy" Feb 24 05:19:21.876606 master-0 kubenswrapper[7614]: I0224 05:19:21.876574 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:21.880299 master-0 kubenswrapper[7614]: I0224 05:19:21.879350 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 24 05:19:21.880299 master-0 kubenswrapper[7614]: I0224 05:19:21.879633 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-zqsq8" Feb 24 05:19:21.880299 master-0 kubenswrapper[7614]: I0224 05:19:21.879846 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 24 05:19:21.880299 master-0 kubenswrapper[7614]: I0224 05:19:21.880032 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 24 05:19:21.880299 master-0 kubenswrapper[7614]: I0224 05:19:21.880258 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 24 05:19:21.880671 master-0 kubenswrapper[7614]: I0224 05:19:21.880651 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 05:19:22.047774 master-0 kubenswrapper[7614]: I0224 05:19:22.047718 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f3cd3830-62b5-49d1-917e-bd993d685c65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.047774 
master-0 kubenswrapper[7614]: I0224 05:19:22.047788 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.048079 master-0 kubenswrapper[7614]: I0224 05:19:22.047821 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.048079 master-0 kubenswrapper[7614]: I0224 05:19:22.047851 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.048079 master-0 kubenswrapper[7614]: I0224 05:19:22.047880 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957g9\" (UniqueName: \"kubernetes.io/projected/f3cd3830-62b5-49d1-917e-bd993d685c65-kube-api-access-957g9\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.149431 master-0 kubenswrapper[7614]: I0224 05:19:22.149274 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-957g9\" (UniqueName: \"kubernetes.io/projected/f3cd3830-62b5-49d1-917e-bd993d685c65-kube-api-access-957g9\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.149431 master-0 kubenswrapper[7614]: I0224 05:19:22.149376 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f3cd3830-62b5-49d1-917e-bd993d685c65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.149431 master-0 kubenswrapper[7614]: I0224 05:19:22.149413 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.149735 master-0 kubenswrapper[7614]: I0224 05:19:22.149454 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: 
\"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.149735 master-0 kubenswrapper[7614]: I0224 05:19:22.149686 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.150115 master-0 kubenswrapper[7614]: I0224 05:19:22.150063 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f3cd3830-62b5-49d1-917e-bd993d685c65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.150683 master-0 kubenswrapper[7614]: I0224 05:19:22.150659 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.151757 master-0 kubenswrapper[7614]: I0224 05:19:22.151647 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.162368 master-0 kubenswrapper[7614]: I0224 05:19:22.162332 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.167491 master-0 kubenswrapper[7614]: I0224 05:19:22.167446 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-957g9\" (UniqueName: \"kubernetes.io/projected/f3cd3830-62b5-49d1-917e-bd993d685c65-kube-api-access-957g9\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:22.210725 master-0 kubenswrapper[7614]: I0224 05:19:22.210656 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:19:23.085830 master-0 kubenswrapper[7614]: I0224 05:19:23.085513 7614 scope.go:117] "RemoveContainer" containerID="23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4" Feb 24 05:19:23.139632 master-0 kubenswrapper[7614]: I0224 05:19:23.139466 7614 scope.go:117] "RemoveContainer" containerID="3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd" Feb 24 05:19:23.140463 master-0 kubenswrapper[7614]: E0224 05:19:23.140415 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd\": container with ID starting with 3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd not found: ID does not exist" containerID="3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd" Feb 24 05:19:23.140523 master-0 kubenswrapper[7614]: I0224 05:19:23.140463 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd"} err="failed to get container status \"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd\": rpc error: code = NotFound desc = could not find container \"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd\": container with ID starting with 3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd not found: ID does not exist" Feb 24 05:19:23.140523 master-0 kubenswrapper[7614]: I0224 05:19:23.140502 7614 scope.go:117] "RemoveContainer" containerID="12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55" Feb 24 05:19:23.141670 master-0 kubenswrapper[7614]: E0224 05:19:23.141610 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55\": container with ID starting with 12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55 not found: ID does not exist" containerID="12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55" Feb 24 05:19:23.141670 master-0 kubenswrapper[7614]: I0224 05:19:23.141634 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55"} err="failed to get container status \"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55\": rpc error: code = NotFound desc = could not find container \"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55\": container with ID starting with 12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55 not found: ID does not exist" Feb 24 05:19:23.141670 master-0 kubenswrapper[7614]: I0224 05:19:23.141649 7614 scope.go:117] "RemoveContainer" containerID="23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4" Feb 24 05:19:23.141961 master-0 kubenswrapper[7614]: E0224 05:19:23.141906 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4\": container with ID starting with 23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4 not found: ID does not exist" containerID="23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4" Feb 24 05:19:23.141961 master-0 kubenswrapper[7614]: I0224 05:19:23.141925 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4"} err="failed to get container status \"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4\": rpc error: code = NotFound desc = could not find container 
\"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4\": container with ID starting with 23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4 not found: ID does not exist" Feb 24 05:19:23.141961 master-0 kubenswrapper[7614]: I0224 05:19:23.141939 7614 scope.go:117] "RemoveContainer" containerID="3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd" Feb 24 05:19:23.142284 master-0 kubenswrapper[7614]: I0224 05:19:23.142224 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd"} err="failed to get container status \"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd\": rpc error: code = NotFound desc = could not find container \"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd\": container with ID starting with 3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd not found: ID does not exist" Feb 24 05:19:23.142284 master-0 kubenswrapper[7614]: I0224 05:19:23.142246 7614 scope.go:117] "RemoveContainer" containerID="12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55" Feb 24 05:19:23.142655 master-0 kubenswrapper[7614]: I0224 05:19:23.142605 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55"} err="failed to get container status \"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55\": rpc error: code = NotFound desc = could not find container \"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55\": container with ID starting with 12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55 not found: ID does not exist" Feb 24 05:19:23.142655 master-0 kubenswrapper[7614]: I0224 05:19:23.142629 7614 scope.go:117] "RemoveContainer" containerID="23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4" Feb 24 
05:19:23.143057 master-0 kubenswrapper[7614]: I0224 05:19:23.143004 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4"} err="failed to get container status \"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4\": rpc error: code = NotFound desc = could not find container \"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4\": container with ID starting with 23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4 not found: ID does not exist" Feb 24 05:19:23.143057 master-0 kubenswrapper[7614]: I0224 05:19:23.143028 7614 scope.go:117] "RemoveContainer" containerID="3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd" Feb 24 05:19:23.143277 master-0 kubenswrapper[7614]: I0224 05:19:23.143250 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd"} err="failed to get container status \"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd\": rpc error: code = NotFound desc = could not find container \"3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd\": container with ID starting with 3e39021de07b2dcc4a8ef149f387beb7c3dc6fa4caab9c3f9d5b52e8cda914cd not found: ID does not exist" Feb 24 05:19:23.143277 master-0 kubenswrapper[7614]: I0224 05:19:23.143270 7614 scope.go:117] "RemoveContainer" containerID="12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55" Feb 24 05:19:23.143982 master-0 kubenswrapper[7614]: I0224 05:19:23.143927 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55"} err="failed to get container status \"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55\": rpc error: code = NotFound desc = could not find container 
\"12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55\": container with ID starting with 12a0130eaadd74114141eb665226069978edad7d44d69a00b70c0c68e0a65d55 not found: ID does not exist" Feb 24 05:19:23.143982 master-0 kubenswrapper[7614]: I0224 05:19:23.143947 7614 scope.go:117] "RemoveContainer" containerID="23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4" Feb 24 05:19:23.145644 master-0 kubenswrapper[7614]: I0224 05:19:23.145595 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4"} err="failed to get container status \"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4\": rpc error: code = NotFound desc = could not find container \"23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4\": container with ID starting with 23aaf196fef5b9187bbb59c5703cab1031a98798417a6b48fb061800fe7935a4 not found: ID does not exist" Feb 24 05:19:23.151358 master-0 kubenswrapper[7614]: W0224 05:19:23.151302 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3cd3830_62b5_49d1_917e_bd993d685c65.slice/crio-19df6454a08add523c5ff47203d9500ee4d5041717ffe824b8f6b33008f7fb0d WatchSource:0}: Error finding container 19df6454a08add523c5ff47203d9500ee4d5041717ffe824b8f6b33008f7fb0d: Status 404 returned error can't find the container with id 19df6454a08add523c5ff47203d9500ee4d5041717ffe824b8f6b33008f7fb0d Feb 24 05:19:23.204447 master-0 kubenswrapper[7614]: I0224 05:19:23.204256 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b6b038-7029-4e3e-af6d-b7f85ac532b0" path="/var/lib/kubelet/pods/51b6b038-7029-4e3e-af6d-b7f85ac532b0/volumes" Feb 24 05:19:23.829555 master-0 kubenswrapper[7614]: I0224 05:19:23.829485 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerStarted","Data":"8c92ed541ae527386db4b6a76cf26d9c5a64e4216b4963a7e69a420ee8324c44"} Feb 24 05:19:23.829707 master-0 kubenswrapper[7614]: I0224 05:19:23.829603 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:23.833533 master-0 kubenswrapper[7614]: I0224 05:19:23.831805 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" event={"ID":"f3cd3830-62b5-49d1-917e-bd993d685c65","Type":"ContainerStarted","Data":"1f44dc53b225ecb6e6f89dd2368c871c5572185f200fea78cfb5b504bac772aa"} Feb 24 05:19:23.833533 master-0 kubenswrapper[7614]: I0224 05:19:23.831855 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" event={"ID":"f3cd3830-62b5-49d1-917e-bd993d685c65","Type":"ContainerStarted","Data":"19df6454a08add523c5ff47203d9500ee4d5041717ffe824b8f6b33008f7fb0d"} Feb 24 05:19:23.849555 master-0 kubenswrapper[7614]: I0224 05:19:23.849493 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podStartSLOduration=3.872922516 podStartE2EDuration="22.849473745s" podCreationTimestamp="2026-02-24 05:19:01 +0000 UTC" firstStartedPulling="2026-02-24 05:19:04.167489646 +0000 UTC m=+275.202232802" lastFinishedPulling="2026-02-24 05:19:23.144040875 +0000 UTC m=+294.178784031" observedRunningTime="2026-02-24 05:19:23.848833059 +0000 UTC m=+294.883576215" watchObservedRunningTime="2026-02-24 05:19:23.849473745 +0000 UTC m=+294.884216901" Feb 24 05:19:24.038853 master-0 kubenswrapper[7614]: I0224 05:19:24.036961 7614 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z"] Feb 24 05:19:24.038853 master-0 kubenswrapper[7614]: I0224 05:19:24.038474 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.043463 master-0 kubenswrapper[7614]: I0224 05:19:24.043429 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2bzhs" Feb 24 05:19:24.043814 master-0 kubenswrapper[7614]: I0224 05:19:24.043769 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 05:19:24.054742 master-0 kubenswrapper[7614]: I0224 05:19:24.054668 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z"] Feb 24 05:19:24.102265 master-0 kubenswrapper[7614]: I0224 05:19:24.101993 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bf6w\" (UniqueName: \"kubernetes.io/projected/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-kube-api-access-4bf6w\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.102265 master-0 kubenswrapper[7614]: I0224 05:19:24.102087 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.102265 master-0 kubenswrapper[7614]: I0224 
05:19:24.102126 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.203486 master-0 kubenswrapper[7614]: I0224 05:19:24.203382 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bf6w\" (UniqueName: \"kubernetes.io/projected/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-kube-api-access-4bf6w\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.203486 master-0 kubenswrapper[7614]: I0224 05:19:24.203491 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.203804 master-0 kubenswrapper[7614]: I0224 05:19:24.203541 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.205806 master-0 kubenswrapper[7614]: I0224 05:19:24.205667 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.218723 master-0 kubenswrapper[7614]: I0224 05:19:24.218672 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.224246 master-0 kubenswrapper[7614]: I0224 05:19:24.224146 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bf6w\" (UniqueName: \"kubernetes.io/projected/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-kube-api-access-4bf6w\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.389703 master-0 kubenswrapper[7614]: I0224 05:19:24.389414 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:19:24.846541 master-0 kubenswrapper[7614]: I0224 05:19:24.846466 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" event={"ID":"f3cd3830-62b5-49d1-917e-bd993d685c65","Type":"ContainerStarted","Data":"d38839e0ed7e846ab14040f4d998fb06d6c2e5c03c631a20168764efc4c0607c"} Feb 24 05:19:24.846541 master-0 kubenswrapper[7614]: I0224 05:19:24.846541 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" event={"ID":"f3cd3830-62b5-49d1-917e-bd993d685c65","Type":"ContainerStarted","Data":"1bb8d464111f0e717ad599e137d9e8e3853e8cfeea75bffbb868b896a7e93fff"} Feb 24 05:19:24.861178 master-0 kubenswrapper[7614]: I0224 05:19:24.861128 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z"] Feb 24 05:19:24.872057 master-0 kubenswrapper[7614]: I0224 05:19:24.871978 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" podStartSLOduration=3.871953993 podStartE2EDuration="3.871953993s" podCreationTimestamp="2026-02-24 05:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:24.868907845 +0000 UTC m=+295.903651001" watchObservedRunningTime="2026-02-24 05:19:24.871953993 +0000 UTC m=+295.906697149" Feb 24 05:19:24.875340 master-0 kubenswrapper[7614]: W0224 05:19:24.875271 7614 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e7f7c02_4c84_432a_8d59_25dd3bfef5c2.slice/crio-8b96b8f7d5979105f35e071dc0c704b23c24808d5269da621b3e55a924016c6c WatchSource:0}: Error finding container 8b96b8f7d5979105f35e071dc0c704b23c24808d5269da621b3e55a924016c6c: Status 404 returned error can't find the container with id 8b96b8f7d5979105f35e071dc0c704b23c24808d5269da621b3e55a924016c6c Feb 24 05:19:25.269572 master-0 kubenswrapper[7614]: I0224 05:19:25.269156 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2"] Feb 24 05:19:25.270389 master-0 kubenswrapper[7614]: I0224 05:19:25.270139 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" Feb 24 05:19:25.275844 master-0 kubenswrapper[7614]: I0224 05:19:25.275688 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 24 05:19:25.280002 master-0 kubenswrapper[7614]: I0224 05:19:25.279959 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz"] Feb 24 05:19:25.281881 master-0 kubenswrapper[7614]: I0224 05:19:25.280798 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.283661 master-0 kubenswrapper[7614]: I0224 05:19:25.282442 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 05:19:25.286432 master-0 kubenswrapper[7614]: I0224 05:19:25.286403 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7"] Feb 24 05:19:25.287079 master-0 kubenswrapper[7614]: I0224 05:19:25.287048 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" Feb 24 05:19:25.292045 master-0 kubenswrapper[7614]: I0224 05:19:25.290762 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7b65dc9fcb-zxkt2"] Feb 24 05:19:25.292045 master-0 kubenswrapper[7614]: I0224 05:19:25.291410 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.297484 master-0 kubenswrapper[7614]: I0224 05:19:25.295972 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 05:19:25.297484 master-0 kubenswrapper[7614]: I0224 05:19:25.296230 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 05:19:25.297484 master-0 kubenswrapper[7614]: I0224 05:19:25.296378 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 05:19:25.297484 master-0 kubenswrapper[7614]: I0224 05:19:25.296511 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 05:19:25.297484 master-0 kubenswrapper[7614]: I0224 05:19:25.296705 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 05:19:25.297484 master-0 kubenswrapper[7614]: I0224 05:19:25.296858 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 05:19:25.300974 master-0 kubenswrapper[7614]: I0224 05:19:25.300905 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2"] Feb 24 05:19:25.308766 master-0 kubenswrapper[7614]: I0224 05:19:25.308670 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz"] Feb 24 05:19:25.313007 master-0 kubenswrapper[7614]: I0224 05:19:25.312967 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7"] Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318619 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8978e4e5-18ef-4b69-a127-5e9409163935-secret-volume\") pod \"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318700 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jspzm\" (UniqueName: \"kubernetes.io/projected/1533c5fa-0387-40bd-a959-e714b65cdacc-kube-api-access-jspzm\") pod \"network-check-source-58fb6744f5-kn2z7\" (UID: \"1533c5fa-0387-40bd-a959-e714b65cdacc\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318739 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8978e4e5-18ef-4b69-a127-5e9409163935-config-volume\") pod \"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318764 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9a96f0d-16b8-47ee-baf2-807d2260fa71-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-hw4m2\" (UID: \"b9a96f0d-16b8-47ee-baf2-807d2260fa71\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318792 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jchr9\" (UniqueName: \"kubernetes.io/projected/8978e4e5-18ef-4b69-a127-5e9409163935-kube-api-access-jchr9\") pod 
\"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318813 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-default-certificate\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318842 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-metrics-certs\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318863 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be7a4b9e-1e9a-4298-b804-21b683805c0e-service-ca-bundle\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318883 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvm29\" (UniqueName: \"kubernetes.io/projected/be7a4b9e-1e9a-4298-b804-21b683805c0e-kube-api-access-wvm29\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.319380 master-0 kubenswrapper[7614]: I0224 05:19:25.318917 7614 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-stats-auth\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.420424 master-0 kubenswrapper[7614]: I0224 05:19:25.420256 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8978e4e5-18ef-4b69-a127-5e9409163935-config-volume\") pod \"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.420673 master-0 kubenswrapper[7614]: I0224 05:19:25.420505 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9a96f0d-16b8-47ee-baf2-807d2260fa71-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-hw4m2\" (UID: \"b9a96f0d-16b8-47ee-baf2-807d2260fa71\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" Feb 24 05:19:25.420673 master-0 kubenswrapper[7614]: I0224 05:19:25.420552 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jchr9\" (UniqueName: \"kubernetes.io/projected/8978e4e5-18ef-4b69-a127-5e9409163935-kube-api-access-jchr9\") pod \"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.420673 master-0 kubenswrapper[7614]: I0224 05:19:25.420578 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-default-certificate\") pod 
\"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.420673 master-0 kubenswrapper[7614]: I0224 05:19:25.420615 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-metrics-certs\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.420673 master-0 kubenswrapper[7614]: I0224 05:19:25.420638 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be7a4b9e-1e9a-4298-b804-21b683805c0e-service-ca-bundle\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.420673 master-0 kubenswrapper[7614]: I0224 05:19:25.420660 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvm29\" (UniqueName: \"kubernetes.io/projected/be7a4b9e-1e9a-4298-b804-21b683805c0e-kube-api-access-wvm29\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.420865 master-0 kubenswrapper[7614]: I0224 05:19:25.420697 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-stats-auth\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.420865 master-0 kubenswrapper[7614]: I0224 05:19:25.420724 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8978e4e5-18ef-4b69-a127-5e9409163935-secret-volume\") pod \"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.420865 master-0 kubenswrapper[7614]: I0224 05:19:25.420756 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jspzm\" (UniqueName: \"kubernetes.io/projected/1533c5fa-0387-40bd-a959-e714b65cdacc-kube-api-access-jspzm\") pod \"network-check-source-58fb6744f5-kn2z7\" (UID: \"1533c5fa-0387-40bd-a959-e714b65cdacc\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" Feb 24 05:19:25.422184 master-0 kubenswrapper[7614]: I0224 05:19:25.421641 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8978e4e5-18ef-4b69-a127-5e9409163935-config-volume\") pod \"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.422184 master-0 kubenswrapper[7614]: I0224 05:19:25.422089 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be7a4b9e-1e9a-4298-b804-21b683805c0e-service-ca-bundle\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.426057 master-0 kubenswrapper[7614]: I0224 05:19:25.426023 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8978e4e5-18ef-4b69-a127-5e9409163935-secret-volume\") pod \"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.426662 master-0 
kubenswrapper[7614]: I0224 05:19:25.426613 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-default-certificate\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.426882 master-0 kubenswrapper[7614]: I0224 05:19:25.426827 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-stats-auth\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.434171 master-0 kubenswrapper[7614]: I0224 05:19:25.434117 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9a96f0d-16b8-47ee-baf2-807d2260fa71-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-hw4m2\" (UID: \"b9a96f0d-16b8-47ee-baf2-807d2260fa71\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" Feb 24 05:19:25.437852 master-0 kubenswrapper[7614]: I0224 05:19:25.437777 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-metrics-certs\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.444622 master-0 kubenswrapper[7614]: I0224 05:19:25.444579 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvm29\" (UniqueName: \"kubernetes.io/projected/be7a4b9e-1e9a-4298-b804-21b683805c0e-kube-api-access-wvm29\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") 
" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.444758 master-0 kubenswrapper[7614]: I0224 05:19:25.444724 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jchr9\" (UniqueName: \"kubernetes.io/projected/8978e4e5-18ef-4b69-a127-5e9409163935-kube-api-access-jchr9\") pod \"collect-profiles-29531835-tsgrz\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.452572 master-0 kubenswrapper[7614]: I0224 05:19:25.452490 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jspzm\" (UniqueName: \"kubernetes.io/projected/1533c5fa-0387-40bd-a959-e714b65cdacc-kube-api-access-jspzm\") pod \"network-check-source-58fb6744f5-kn2z7\" (UID: \"1533c5fa-0387-40bd-a959-e714b65cdacc\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" Feb 24 05:19:25.601160 master-0 kubenswrapper[7614]: I0224 05:19:25.601069 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" Feb 24 05:19:25.618267 master-0 kubenswrapper[7614]: I0224 05:19:25.618217 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:25.640914 master-0 kubenswrapper[7614]: I0224 05:19:25.640858 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" Feb 24 05:19:25.659657 master-0 kubenswrapper[7614]: I0224 05:19:25.659607 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:25.857465 master-0 kubenswrapper[7614]: I0224 05:19:25.857407 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" event={"ID":"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2","Type":"ContainerStarted","Data":"5cc7429167a7ff295c5a58015d0670bc117216e9d932b18a47e0703c768cb63f"} Feb 24 05:19:25.857592 master-0 kubenswrapper[7614]: I0224 05:19:25.857476 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" event={"ID":"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2","Type":"ContainerStarted","Data":"efa90e77631439dbef62b24eb0a109dbbb0250a2d2b24124da5e8a8cbc7dcbd0"} Feb 24 05:19:25.857592 master-0 kubenswrapper[7614]: I0224 05:19:25.857497 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" event={"ID":"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2","Type":"ContainerStarted","Data":"8b96b8f7d5979105f35e071dc0c704b23c24808d5269da621b3e55a924016c6c"} Feb 24 05:19:25.859010 master-0 kubenswrapper[7614]: I0224 05:19:25.858962 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerStarted","Data":"5c76314bfc127c2893886d4278db6947daa2fbb82909a575cdadd2f5a3b4b008"} Feb 24 05:19:25.880014 master-0 kubenswrapper[7614]: I0224 05:19:25.879889 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" podStartSLOduration=1.879850407 podStartE2EDuration="1.879850407s" podCreationTimestamp="2026-02-24 05:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-24 05:19:25.877021244 +0000 UTC m=+296.911764400" watchObservedRunningTime="2026-02-24 05:19:25.879850407 +0000 UTC m=+296.914593563" Feb 24 05:19:26.065953 master-0 kubenswrapper[7614]: I0224 05:19:26.065905 7614 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 05:19:26.111230 master-0 kubenswrapper[7614]: I0224 05:19:26.107186 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz"] Feb 24 05:19:26.115226 master-0 kubenswrapper[7614]: W0224 05:19:26.113652 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8978e4e5_18ef_4b69_a127_5e9409163935.slice/crio-379b0200953b199da1fee7353da8664ed763cba78b2a8cda5a307db9466ab184 WatchSource:0}: Error finding container 379b0200953b199da1fee7353da8664ed763cba78b2a8cda5a307db9466ab184: Status 404 returned error can't find the container with id 379b0200953b199da1fee7353da8664ed763cba78b2a8cda5a307db9466ab184 Feb 24 05:19:26.228265 master-0 kubenswrapper[7614]: I0224 05:19:26.228205 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2"] Feb 24 05:19:26.231269 master-0 kubenswrapper[7614]: I0224 05:19:26.231211 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7"] Feb 24 05:19:26.255485 master-0 kubenswrapper[7614]: W0224 05:19:26.255409 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1533c5fa_0387_40bd_a959_e714b65cdacc.slice/crio-c932287e23f5b8d24efa88b511b35c92261a32985b4d2a556c22eb4a08ba11cb WatchSource:0}: Error finding container c932287e23f5b8d24efa88b511b35c92261a32985b4d2a556c22eb4a08ba11cb: Status 404 returned error 
can't find the container with id c932287e23f5b8d24efa88b511b35c92261a32985b4d2a556c22eb4a08ba11cb Feb 24 05:19:26.871040 master-0 kubenswrapper[7614]: I0224 05:19:26.870940 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" event={"ID":"b9a96f0d-16b8-47ee-baf2-807d2260fa71","Type":"ContainerStarted","Data":"9e66323acb79027dbee260b2bd6ea317379967ab104a220c1093c958a45ebc27"} Feb 24 05:19:26.874724 master-0 kubenswrapper[7614]: I0224 05:19:26.874389 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" event={"ID":"8978e4e5-18ef-4b69-a127-5e9409163935","Type":"ContainerStarted","Data":"3c24b58bd92b804a63d803200f7a1ff1770a8e7351e2091f1326f31e84f6d272"} Feb 24 05:19:26.874724 master-0 kubenswrapper[7614]: I0224 05:19:26.874420 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" event={"ID":"8978e4e5-18ef-4b69-a127-5e9409163935","Type":"ContainerStarted","Data":"379b0200953b199da1fee7353da8664ed763cba78b2a8cda5a307db9466ab184"} Feb 24 05:19:26.879058 master-0 kubenswrapper[7614]: I0224 05:19:26.878769 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" event={"ID":"1533c5fa-0387-40bd-a959-e714b65cdacc","Type":"ContainerStarted","Data":"c8ac9898a91ce871e4aeb489f9a7534d985b997cb9893d7961d516e592edcab0"} Feb 24 05:19:26.879058 master-0 kubenswrapper[7614]: I0224 05:19:26.878800 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" event={"ID":"1533c5fa-0387-40bd-a959-e714b65cdacc","Type":"ContainerStarted","Data":"c932287e23f5b8d24efa88b511b35c92261a32985b4d2a556c22eb4a08ba11cb"} Feb 24 05:19:26.919573 master-0 kubenswrapper[7614]: I0224 05:19:26.915886 7614 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" podStartSLOduration=266.915863573 podStartE2EDuration="4m26.915863573s" podCreationTimestamp="2026-02-24 05:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:26.911002127 +0000 UTC m=+297.945745313" watchObservedRunningTime="2026-02-24 05:19:26.915863573 +0000 UTC m=+297.950606729" Feb 24 05:19:26.931873 master-0 kubenswrapper[7614]: I0224 05:19:26.931592 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" podStartSLOduration=343.931568386 podStartE2EDuration="5m43.931568386s" podCreationTimestamp="2026-02-24 05:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:26.929866602 +0000 UTC m=+297.964609758" watchObservedRunningTime="2026-02-24 05:19:26.931568386 +0000 UTC m=+297.966311542" Feb 24 05:19:27.518336 master-0 kubenswrapper[7614]: I0224 05:19:27.514346 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:19:27.719810 master-0 kubenswrapper[7614]: I0224 05:19:27.719734 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/0.log" Feb 24 05:19:28.289630 master-0 kubenswrapper[7614]: I0224 05:19:28.289584 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6f8b7f45f7-5df4m_812552f3-09b1-43f8-b910-c78e776127f8/fix-audit-permissions/0.log" Feb 24 05:19:28.316855 master-0 kubenswrapper[7614]: I0224 05:19:28.316776 7614 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6f8b7f45f7-5df4m_812552f3-09b1-43f8-b910-c78e776127f8/oauth-apiserver/0.log" Feb 24 05:19:28.514254 master-0 kubenswrapper[7614]: I0224 05:19:28.513072 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xxl55"] Feb 24 05:19:28.515225 master-0 kubenswrapper[7614]: I0224 05:19:28.514525 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-tfmbs_7a2c651d-ea1a-41f2-9745-04adc8d88904/etcd-operator/0.log" Feb 24 05:19:28.515550 master-0 kubenswrapper[7614]: I0224 05:19:28.515513 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.518783 master-0 kubenswrapper[7614]: I0224 05:19:28.518733 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 24 05:19:28.518958 master-0 kubenswrapper[7614]: I0224 05:19:28.518918 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-rzbrp" Feb 24 05:19:28.525362 master-0 kubenswrapper[7614]: I0224 05:19:28.519124 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 24 05:19:28.587816 master-0 kubenswrapper[7614]: I0224 05:19:28.587292 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.587816 master-0 kubenswrapper[7614]: I0224 05:19:28.587401 7614 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.587816 master-0 kubenswrapper[7614]: I0224 05:19:28.587447 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvznm\" (UniqueName: \"kubernetes.io/projected/c847d0c0-cc92-4d56-9e47-b83d9a39a745-kube-api-access-qvznm\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.691342 master-0 kubenswrapper[7614]: I0224 05:19:28.690820 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.691342 master-0 kubenswrapper[7614]: I0224 05:19:28.691209 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.691342 master-0 kubenswrapper[7614]: I0224 05:19:28.691286 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvznm\" (UniqueName: \"kubernetes.io/projected/c847d0c0-cc92-4d56-9e47-b83d9a39a745-kube-api-access-qvznm\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " 
pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.707619 master-0 kubenswrapper[7614]: I0224 05:19:28.697441 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.707619 master-0 kubenswrapper[7614]: I0224 05:19:28.702303 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.720883 master-0 kubenswrapper[7614]: I0224 05:19:28.713554 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-tfmbs_7a2c651d-ea1a-41f2-9745-04adc8d88904/etcd-operator/1.log" Feb 24 05:19:28.720883 master-0 kubenswrapper[7614]: I0224 05:19:28.716567 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvznm\" (UniqueName: \"kubernetes.io/projected/c847d0c0-cc92-4d56-9e47-b83d9a39a745-kube-api-access-qvznm\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.849216 master-0 kubenswrapper[7614]: I0224 05:19:28.849054 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:19:28.893491 master-0 kubenswrapper[7614]: I0224 05:19:28.893378 7614 generic.go:334] "Generic (PLEG): container finished" podID="8978e4e5-18ef-4b69-a127-5e9409163935" containerID="3c24b58bd92b804a63d803200f7a1ff1770a8e7351e2091f1326f31e84f6d272" exitCode=0 Feb 24 05:19:28.893491 master-0 kubenswrapper[7614]: I0224 05:19:28.893463 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" event={"ID":"8978e4e5-18ef-4b69-a127-5e9409163935","Type":"ContainerDied","Data":"3c24b58bd92b804a63d803200f7a1ff1770a8e7351e2091f1326f31e84f6d272"} Feb 24 05:19:28.904377 master-0 kubenswrapper[7614]: I0224 05:19:28.904236 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/setup/0.log" Feb 24 05:19:29.103467 master-0 kubenswrapper[7614]: I0224 05:19:29.103255 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-ensure-env-vars/0.log" Feb 24 05:19:29.266649 master-0 kubenswrapper[7614]: I0224 05:19:29.266552 7614 scope.go:117] "RemoveContainer" containerID="c068b345adaab906615d4122b8703a382ed80a18092bab0453b7f7d8b6ad8324" Feb 24 05:19:29.304065 master-0 kubenswrapper[7614]: I0224 05:19:29.304006 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-resources-copy/0.log" Feb 24 05:19:29.525428 master-0 kubenswrapper[7614]: I0224 05:19:29.525350 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log" Feb 24 05:19:29.710129 master-0 kubenswrapper[7614]: I0224 05:19:29.710077 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log" Feb 24 05:19:29.907837 master-0 kubenswrapper[7614]: I0224 05:19:29.907798 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log" Feb 24 05:19:30.102978 master-0 kubenswrapper[7614]: I0224 05:19:30.102909 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-readyz/0.log" Feb 24 05:19:30.304919 master-0 kubenswrapper[7614]: I0224 05:19:30.304841 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log" Feb 24 05:19:30.516916 master-0 kubenswrapper[7614]: I0224 05:19:30.516849 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_2d3d57f1-cd67-4f1d-b267-f652b9bb3448/installer/0.log" Feb 24 05:19:30.705694 master-0 kubenswrapper[7614]: I0224 05:19:30.705548 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-ncrqj_17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/kube-apiserver-operator/1.log" Feb 24 05:19:30.904094 master-0 kubenswrapper[7614]: I0224 05:19:30.903953 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-ncrqj_17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/kube-apiserver-operator/2.log" Feb 24 05:19:31.103250 master-0 kubenswrapper[7614]: I0224 05:19:31.103156 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/setup/0.log" Feb 24 05:19:31.313386 master-0 kubenswrapper[7614]: I0224 05:19:31.312984 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver/0.log" Feb 24 05:19:31.503204 master-0 kubenswrapper[7614]: I0224 05:19:31.503120 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_687e92a6cecf1e2beeef16a0b322ad08/kube-apiserver-insecure-readyz/0.log" Feb 24 05:19:31.712882 master-0 kubenswrapper[7614]: I0224 05:19:31.712784 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e44f770d-f88d-446a-a22f-51b30e89690c/installer/0.log" Feb 24 05:19:31.903733 master-0 kubenswrapper[7614]: I0224 05:19:31.903578 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-8zrj9_22813c83-2f60-44ad-9624-ad367cec08f7/kube-controller-manager-operator/1.log" Feb 24 05:19:32.105157 master-0 kubenswrapper[7614]: I0224 05:19:32.105077 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-8zrj9_22813c83-2f60-44ad-9624-ad367cec08f7/kube-controller-manager-operator/2.log" Feb 24 05:19:32.789850 master-0 kubenswrapper[7614]: I0224 05:19:32.789452 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/3.log" Feb 24 05:19:32.805601 master-0 kubenswrapper[7614]: I0224 05:19:32.805566 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/kube-controller-manager/4.log" Feb 24 05:19:32.906030 master-0 kubenswrapper[7614]: I0224 05:19:32.905921 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_c9ad9373c007a4fcd25e70622bdc8deb/cluster-policy-controller/0.log" Feb 24 05:19:33.111918 master-0 kubenswrapper[7614]: I0224 05:19:33.111737 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/0.log" Feb 24 05:19:33.308947 master-0 kubenswrapper[7614]: I0224 05:19:33.308888 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_56c3cb71c9851003c8de7e7c5db4b87e/kube-scheduler/1.log" Feb 24 05:19:33.620748 master-0 kubenswrapper[7614]: I0224 05:19:33.620663 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_74d070e9-4193-4598-ad68-15955b07d649/installer/0.log" Feb 24 05:19:33.621592 master-0 kubenswrapper[7614]: I0224 05:19:33.621484 7614 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 24 05:19:33.623198 master-0 kubenswrapper[7614]: I0224 05:19:33.621748 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" containerID="cri-o://28b8da242544132c6f029ed620036b6ee2e59516b410b237f207e8e4173db9a8" gracePeriod=30 Feb 24 05:19:33.623724 master-0 kubenswrapper[7614]: I0224 05:19:33.623676 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 24 05:19:33.624077 master-0 kubenswrapper[7614]: E0224 05:19:33.624036 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 24 05:19:33.624077 master-0 kubenswrapper[7614]: I0224 05:19:33.624070 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" 
containerName="kube-scheduler" Feb 24 05:19:33.624143 master-0 kubenswrapper[7614]: E0224 05:19:33.624082 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 24 05:19:33.624143 master-0 kubenswrapper[7614]: I0224 05:19:33.624093 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 24 05:19:33.624282 master-0 kubenswrapper[7614]: I0224 05:19:33.624250 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 24 05:19:33.624347 master-0 kubenswrapper[7614]: I0224 05:19:33.624290 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c3cb71c9851003c8de7e7c5db4b87e" containerName="kube-scheduler" Feb 24 05:19:33.625428 master-0 kubenswrapper[7614]: I0224 05:19:33.625396 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:33.677693 master-0 kubenswrapper[7614]: I0224 05:19:33.677652 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"ebb9c3b6f4ad10a97951cbde655daea9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:33.677961 master-0 kubenswrapper[7614]: I0224 05:19:33.677944 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"ebb9c3b6f4ad10a97951cbde655daea9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:33.778973 master-0 kubenswrapper[7614]: I0224 
05:19:33.778913 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"ebb9c3b6f4ad10a97951cbde655daea9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:33.779318 master-0 kubenswrapper[7614]: I0224 05:19:33.779063 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"ebb9c3b6f4ad10a97951cbde655daea9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:33.779318 master-0 kubenswrapper[7614]: I0224 05:19:33.779120 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"ebb9c3b6f4ad10a97951cbde655daea9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:33.779318 master-0 kubenswrapper[7614]: I0224 05:19:33.779201 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"ebb9c3b6f4ad10a97951cbde655daea9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:35.038943 master-0 kubenswrapper[7614]: I0224 05:19:35.038829 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-retry-1-master-0_4df29682-0936-44a2-9629-2e90115671e0/installer/0.log" Feb 24 05:19:35.947196 master-0 kubenswrapper[7614]: I0224 05:19:35.947078 7614 generic.go:334] "Generic (PLEG): container finished" podID="4df29682-0936-44a2-9629-2e90115671e0" 
containerID="9591bdc727c99f89e551f4c32dad8c2aa3f7be8a52343c558f1322701668f7df" exitCode=0 Feb 24 05:19:35.947578 master-0 kubenswrapper[7614]: I0224 05:19:35.947174 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-retry-1-master-0" event={"ID":"4df29682-0936-44a2-9629-2e90115671e0","Type":"ContainerDied","Data":"9591bdc727c99f89e551f4c32dad8c2aa3f7be8a52343c558f1322701668f7df"} Feb 24 05:19:35.951047 master-0 kubenswrapper[7614]: I0224 05:19:35.950968 7614 generic.go:334] "Generic (PLEG): container finished" podID="56c3cb71c9851003c8de7e7c5db4b87e" containerID="28b8da242544132c6f029ed620036b6ee2e59516b410b237f207e8e4173db9a8" exitCode=0 Feb 24 05:19:35.951047 master-0 kubenswrapper[7614]: I0224 05:19:35.951056 7614 scope.go:117] "RemoveContainer" containerID="ec92c2ccaab799d81de24af8faba27c40dd8197fcd80279d1de6e4daee2ed87c" Feb 24 05:19:36.579786 master-0 kubenswrapper[7614]: I0224 05:19:36.579646 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:36.583875 master-0 kubenswrapper[7614]: I0224 05:19:36.583817 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 24 05:19:37.609694 master-0 kubenswrapper[7614]: I0224 05:19:37.609619 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-8l7xv_e6f05507-d5c1-4102-a220-1db715a496e3/kube-scheduler-operator-container/0.log" Feb 24 05:19:38.735690 master-0 kubenswrapper[7614]: I0224 05:19:38.729523 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-77cd4d9559-8l7xv_e6f05507-d5c1-4102-a220-1db715a496e3/kube-scheduler-operator-container/1.log" Feb 24 05:19:40.517507 master-0 kubenswrapper[7614]: I0224 05:19:40.517434 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-8586dccc9b-49fsv_58ecd829-4749-4c8a-933b-16b4acccac90/openshift-apiserver-operator/0.log" Feb 24 05:19:41.458125 master-0 kubenswrapper[7614]: I0224 05:19:41.454977 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-fdc9d7cdd-8v72m_b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/fix-audit-permissions/0.log" Feb 24 05:19:41.466206 master-0 kubenswrapper[7614]: I0224 05:19:41.463597 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-fdc9d7cdd-8v72m_b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/openshift-apiserver/0.log" Feb 24 05:19:41.477752 master-0 kubenswrapper[7614]: I0224 05:19:41.477717 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-fdc9d7cdd-8v72m_b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/openshift-apiserver-check-endpoints/0.log" Feb 24 05:19:42.291256 master-0 kubenswrapper[7614]: I0224 
05:19:42.291168 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-tfmbs_7a2c651d-ea1a-41f2-9745-04adc8d88904/etcd-operator/0.log" Feb 24 05:19:42.440080 master-0 kubenswrapper[7614]: I0224 05:19:42.440020 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" Feb 24 05:19:42.545131 master-0 kubenswrapper[7614]: I0224 05:19:42.544937 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8978e4e5-18ef-4b69-a127-5e9409163935-secret-volume\") pod \"8978e4e5-18ef-4b69-a127-5e9409163935\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " Feb 24 05:19:42.545462 master-0 kubenswrapper[7614]: I0224 05:19:42.545217 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jchr9\" (UniqueName: \"kubernetes.io/projected/8978e4e5-18ef-4b69-a127-5e9409163935-kube-api-access-jchr9\") pod \"8978e4e5-18ef-4b69-a127-5e9409163935\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " Feb 24 05:19:42.545462 master-0 kubenswrapper[7614]: I0224 05:19:42.545276 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8978e4e5-18ef-4b69-a127-5e9409163935-config-volume\") pod \"8978e4e5-18ef-4b69-a127-5e9409163935\" (UID: \"8978e4e5-18ef-4b69-a127-5e9409163935\") " Feb 24 05:19:42.546286 master-0 kubenswrapper[7614]: I0224 05:19:42.546194 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8978e4e5-18ef-4b69-a127-5e9409163935-config-volume" (OuterVolumeSpecName: "config-volume") pod "8978e4e5-18ef-4b69-a127-5e9409163935" (UID: "8978e4e5-18ef-4b69-a127-5e9409163935"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:19:42.549938 master-0 kubenswrapper[7614]: I0224 05:19:42.549848 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8978e4e5-18ef-4b69-a127-5e9409163935-kube-api-access-jchr9" (OuterVolumeSpecName: "kube-api-access-jchr9") pod "8978e4e5-18ef-4b69-a127-5e9409163935" (UID: "8978e4e5-18ef-4b69-a127-5e9409163935"). InnerVolumeSpecName "kube-api-access-jchr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:19:42.550423 master-0 kubenswrapper[7614]: I0224 05:19:42.550352 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8978e4e5-18ef-4b69-a127-5e9409163935-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8978e4e5-18ef-4b69-a127-5e9409163935" (UID: "8978e4e5-18ef-4b69-a127-5e9409163935"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:19:42.650207 master-0 kubenswrapper[7614]: I0224 05:19:42.647418 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-545bf96f4d-tfmbs_7a2c651d-ea1a-41f2-9745-04adc8d88904/etcd-operator/1.log" Feb 24 05:19:42.650207 master-0 kubenswrapper[7614]: I0224 05:19:42.648931 7614 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8978e4e5-18ef-4b69-a127-5e9409163935-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:42.650207 master-0 kubenswrapper[7614]: I0224 05:19:42.648998 7614 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8978e4e5-18ef-4b69-a127-5e9409163935-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:42.650207 master-0 kubenswrapper[7614]: I0224 05:19:42.649021 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jchr9\" (UniqueName: 
\"kubernetes.io/projected/8978e4e5-18ef-4b69-a127-5e9409163935-kube-api-access-jchr9\") on node \"master-0\" DevicePath \"\"" Feb 24 05:19:42.670607 master-0 kubenswrapper[7614]: I0224 05:19:42.670529 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-596f79dd6f-v22h2_cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/catalog-operator/0.log" Feb 24 05:19:42.681878 master-0 kubenswrapper[7614]: I0224 05:19:42.681815 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_collect-profiles-29531835-tsgrz_8978e4e5-18ef-4b69-a127-5e9409163935/collect-profiles/0.log" Feb 24 05:19:42.736394 master-0 kubenswrapper[7614]: I0224 05:19:42.735724 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5499d7f7bb-8xdmq_9666fc94-71e3-46af-8b45-26e3a085d076/olm-operator/0.log" Feb 24 05:19:42.742757 master-0 kubenswrapper[7614]: I0224 05:19:42.742715 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-9d82f_49bfccec-61ec-4bef-a561-9f6e6f906215/kube-rbac-proxy/0.log" Feb 24 05:19:42.756263 master-0 kubenswrapper[7614]: I0224 05:19:42.756203 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-9d82f_49bfccec-61ec-4bef-a561-9f6e6f906215/package-server-manager/0.log" Feb 24 05:19:42.766456 master-0 kubenswrapper[7614]: I0224 05:19:42.765516 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-df5f88cd4-cwzcs_49b426a3-f16e-40e9-a166-7270d4cfcc60/packageserver/0.log" Feb 24 05:19:42.798653 master-0 kubenswrapper[7614]: I0224 05:19:42.798376 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:42.952847 master-0 kubenswrapper[7614]: I0224 05:19:42.952766 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4df29682-0936-44a2-9629-2e90115671e0-kube-api-access\") pod \"4df29682-0936-44a2-9629-2e90115671e0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") "
Feb 24 05:19:42.953115 master-0 kubenswrapper[7614]: I0224 05:19:42.952872 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-var-lock\") pod \"4df29682-0936-44a2-9629-2e90115671e0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") "
Feb 24 05:19:42.953115 master-0 kubenswrapper[7614]: I0224 05:19:42.952972 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-kubelet-dir\") pod \"4df29682-0936-44a2-9629-2e90115671e0\" (UID: \"4df29682-0936-44a2-9629-2e90115671e0\") "
Feb 24 05:19:42.953381 master-0 kubenswrapper[7614]: I0224 05:19:42.953347 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4df29682-0936-44a2-9629-2e90115671e0" (UID: "4df29682-0936-44a2-9629-2e90115671e0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:19:42.953976 master-0 kubenswrapper[7614]: I0224 05:19:42.953910 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-var-lock" (OuterVolumeSpecName: "var-lock") pod "4df29682-0936-44a2-9629-2e90115671e0" (UID: "4df29682-0936-44a2-9629-2e90115671e0"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:19:42.956935 master-0 kubenswrapper[7614]: I0224 05:19:42.956860 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df29682-0936-44a2-9629-2e90115671e0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4df29682-0936-44a2-9629-2e90115671e0" (UID: "4df29682-0936-44a2-9629-2e90115671e0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:19:43.007280 master-0 kubenswrapper[7614]: I0224 05:19:43.007089 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz" event={"ID":"8978e4e5-18ef-4b69-a127-5e9409163935","Type":"ContainerDied","Data":"379b0200953b199da1fee7353da8664ed763cba78b2a8cda5a307db9466ab184"}
Feb 24 05:19:43.007280 master-0 kubenswrapper[7614]: I0224 05:19:43.007152 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="379b0200953b199da1fee7353da8664ed763cba78b2a8cda5a307db9466ab184"
Feb 24 05:19:43.007280 master-0 kubenswrapper[7614]: I0224 05:19:43.007249 7614 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz"
Feb 24 05:19:43.013534 master-0 kubenswrapper[7614]: I0224 05:19:43.013481 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-retry-1-master-0" event={"ID":"4df29682-0936-44a2-9629-2e90115671e0","Type":"ContainerDied","Data":"33a4bcbe5ee93a7507e3b17c9d65e1fc83f9e2c984de2f2f9d7e2c4fd84b6d8a"}
Feb 24 05:19:43.013635 master-0 kubenswrapper[7614]: I0224 05:19:43.013547 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a4bcbe5ee93a7507e3b17c9d65e1fc83f9e2c984de2f2f9d7e2c4fd84b6d8a"
Feb 24 05:19:43.013635 master-0 kubenswrapper[7614]: I0224 05:19:43.013569 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:19:43.055201 master-0 kubenswrapper[7614]: I0224 05:19:43.055042 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4df29682-0936-44a2-9629-2e90115671e0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:43.055201 master-0 kubenswrapper[7614]: I0224 05:19:43.055095 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:43.055201 master-0 kubenswrapper[7614]: I0224 05:19:43.055106 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4df29682-0936-44a2-9629-2e90115671e0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:49.967102 master-0 kubenswrapper[7614]: W0224 05:19:49.966671 7614 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc847d0c0_cc92_4d56_9e47_b83d9a39a745.slice/crio-53aff8ce601eb36b54bc43ffb3ad6e1b16683e9a02c222af744cc38c77ef8aa0 WatchSource:0}: Error finding container 53aff8ce601eb36b54bc43ffb3ad6e1b16683e9a02c222af744cc38c77ef8aa0: Status 404 returned error can't find the container with id 53aff8ce601eb36b54bc43ffb3ad6e1b16683e9a02c222af744cc38c77ef8aa0
Feb 24 05:19:49.971148 master-0 kubenswrapper[7614]: W0224 05:19:49.971091 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebb9c3b6f4ad10a97951cbde655daea9.slice/crio-639ae518497ba1706dda96412a5f991e087afb115a63188b2e7c534e5017f902 WatchSource:0}: Error finding container 639ae518497ba1706dda96412a5f991e087afb115a63188b2e7c534e5017f902: Status 404 returned error can't find the container with id 639ae518497ba1706dda96412a5f991e087afb115a63188b2e7c534e5017f902
Feb 24 05:19:50.001262 master-0 kubenswrapper[7614]: I0224 05:19:50.001206 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 05:19:50.075549 master-0 kubenswrapper[7614]: I0224 05:19:50.075471 7614 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 24 05:19:50.075966 master-0 kubenswrapper[7614]: I0224 05:19:50.075479 7614 scope.go:117] "RemoveContainer" containerID="28b8da242544132c6f029ed620036b6ee2e59516b410b237f207e8e4173db9a8"
Feb 24 05:19:50.077648 master-0 kubenswrapper[7614]: I0224 05:19:50.077571 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerStarted","Data":"639ae518497ba1706dda96412a5f991e087afb115a63188b2e7c534e5017f902"}
Feb 24 05:19:50.078225 master-0 kubenswrapper[7614]: I0224 05:19:50.078198 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") "
Feb 24 05:19:50.078521 master-0 kubenswrapper[7614]: I0224 05:19:50.078375 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets" (OuterVolumeSpecName: "secrets") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "secrets".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:19:50.078521 master-0 kubenswrapper[7614]: I0224 05:19:50.078388 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") pod \"56c3cb71c9851003c8de7e7c5db4b87e\" (UID: \"56c3cb71c9851003c8de7e7c5db4b87e\") "
Feb 24 05:19:50.078593 master-0 kubenswrapper[7614]: I0224 05:19:50.078545 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs" (OuterVolumeSpecName: "logs") pod "56c3cb71c9851003c8de7e7c5db4b87e" (UID: "56c3cb71c9851003c8de7e7c5db4b87e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:19:50.078835 master-0 kubenswrapper[7614]: I0224 05:19:50.078812 7614 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:50.078884 master-0 kubenswrapper[7614]: I0224 05:19:50.078843 7614 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/56c3cb71c9851003c8de7e7c5db4b87e-secrets\") on node \"master-0\" DevicePath \"\""
Feb 24 05:19:50.079909 master-0 kubenswrapper[7614]: I0224 05:19:50.079860 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xxl55" event={"ID":"c847d0c0-cc92-4d56-9e47-b83d9a39a745","Type":"ContainerStarted","Data":"53aff8ce601eb36b54bc43ffb3ad6e1b16683e9a02c222af744cc38c77ef8aa0"}
Feb 24 05:19:51.089724 master-0 kubenswrapper[7614]: I0224 05:19:51.089634 7614 generic.go:334] "Generic (PLEG): container finished" podID="8f3825c1-975c-40b5-a6ad-0f200968b3cd" containerID="ffc314400db214f427906ec4ca12f75c59303e7a375e1e0d03ee1ca927488079" exitCode=0
Feb 24 05:19:51.090433 master-0
kubenswrapper[7614]: I0224 05:19:51.089728 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm8sw" event={"ID":"8f3825c1-975c-40b5-a6ad-0f200968b3cd","Type":"ContainerDied","Data":"ffc314400db214f427906ec4ca12f75c59303e7a375e1e0d03ee1ca927488079"}
Feb 24 05:19:51.091203 master-0 kubenswrapper[7614]: I0224 05:19:51.091161 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xxl55" event={"ID":"c847d0c0-cc92-4d56-9e47-b83d9a39a745","Type":"ContainerStarted","Data":"445b9ac1491d74c92f5eb5dd67bf98eb6f4bf0829718ba104fe1937509f44357"}
Feb 24 05:19:51.092195 master-0 kubenswrapper[7614]: I0224 05:19:51.092136 7614 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 24 05:19:51.095908 master-0 kubenswrapper[7614]: I0224 05:19:51.095852 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerStarted","Data":"140a9b5fdc72c4b3ab1b7bcc97ac10d0500b7b5e5c7d097d9570d8dd233f08cb"}
Feb 24 05:19:51.099074 master-0 kubenswrapper[7614]: I0224 05:19:51.099024 7614 generic.go:334] "Generic (PLEG): container finished" podID="2c6bb439-ed17-4761-b193-580be5f6aa00" containerID="53c7e3fc41d9bab35b02eeb11ff0277359d3318a819e1c141438a6ded2b7e362" exitCode=0
Feb 24 05:19:51.099168 master-0 kubenswrapper[7614]: I0224 05:19:51.099109 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gn8m8" event={"ID":"2c6bb439-ed17-4761-b193-580be5f6aa00","Type":"ContainerDied","Data":"53c7e3fc41d9bab35b02eeb11ff0277359d3318a819e1c141438a6ded2b7e362"}
Feb 24 05:19:51.102024 master-0 kubenswrapper[7614]: I0224 05:19:51.101981 7614 generic.go:334] "Generic (PLEG): container finished" podID="ebb9c3b6f4ad10a97951cbde655daea9"
containerID="c31c78349f1ab025d6ecadfbb83b67c0bce9a73e637fa587febda4c860d8e036" exitCode=0
Feb 24 05:19:51.102120 master-0 kubenswrapper[7614]: I0224 05:19:51.102059 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerDied","Data":"c31c78349f1ab025d6ecadfbb83b67c0bce9a73e637fa587febda4c860d8e036"}
Feb 24 05:19:51.104059 master-0 kubenswrapper[7614]: I0224 05:19:51.104009 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" event={"ID":"b9a96f0d-16b8-47ee-baf2-807d2260fa71","Type":"ContainerStarted","Data":"e812e63ef208b52a5576642c47ee03ef1c2e9f1ca87c0a9d25d7923e244b2f62"}
Feb 24 05:19:51.105147 master-0 kubenswrapper[7614]: I0224 05:19:51.105105 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2"
Feb 24 05:19:51.109933 master-0 kubenswrapper[7614]: I0224 05:19:51.107531 7614 generic.go:334] "Generic (PLEG): container finished" podID="75b4304c-09f2-499e-8c2f-da603e43ba72" containerID="6d21fdb0da7b4e08eb7332d4f6f4cc9f79390ab0d373543815483f10f2185255" exitCode=0
Feb 24 05:19:51.109933 master-0 kubenswrapper[7614]: I0224 05:19:51.107616 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v64s6" event={"ID":"75b4304c-09f2-499e-8c2f-da603e43ba72","Type":"ContainerDied","Data":"6d21fdb0da7b4e08eb7332d4f6f4cc9f79390ab0d373543815483f10f2185255"}
Feb 24 05:19:51.110506 master-0 kubenswrapper[7614]: I0224 05:19:51.110459 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2"
Feb 24 05:19:51.110839 master-0 kubenswrapper[7614]: I0224 05:19:51.110798 7614 generic.go:334] "Generic (PLEG): container finished"
podID="cd674e58-b749-46fb-8a28-66012fd8b401" containerID="124c812cfefad15d4947b33d7dd6cb8f0bef4d7acc6ad12461d90e6b781bfc01" exitCode=0
Feb 24 05:19:51.110891 master-0 kubenswrapper[7614]: I0224 05:19:51.110846 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68vwc" event={"ID":"cd674e58-b749-46fb-8a28-66012fd8b401","Type":"ContainerDied","Data":"124c812cfefad15d4947b33d7dd6cb8f0bef4d7acc6ad12461d90e6b781bfc01"}
Feb 24 05:19:51.181190 master-0 kubenswrapper[7614]: I0224 05:19:51.181059 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" podStartSLOduration=256.496204498 podStartE2EDuration="4m40.181026943s" podCreationTimestamp="2026-02-24 05:15:11 +0000 UTC" firstStartedPulling="2026-02-24 05:19:26.261193757 +0000 UTC m=+297.295936913" lastFinishedPulling="2026-02-24 05:19:49.946016202 +0000 UTC m=+320.980759358" observedRunningTime="2026-02-24 05:19:51.177573963 +0000 UTC m=+322.212317159" watchObservedRunningTime="2026-02-24 05:19:51.181026943 +0000 UTC m=+322.215770119"
Feb 24 05:19:51.190595 master-0 kubenswrapper[7614]: I0224 05:19:51.189984 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c3cb71c9851003c8de7e7c5db4b87e" path="/var/lib/kubelet/pods/56c3cb71c9851003c8de7e7c5db4b87e/volumes"
Feb 24 05:19:51.190595 master-0 kubenswrapper[7614]: I0224 05:19:51.190390 7614 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Feb 24 05:19:51.211400 master-0 kubenswrapper[7614]: I0224 05:19:51.211197 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 24 05:19:51.211400 master-0 kubenswrapper[7614]: I0224 05:19:51.211237 7614 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0"
mirrorPodUID="cea19e65-8294-4a11-9458-560b9b3fbebf"
Feb 24 05:19:51.225262 master-0 kubenswrapper[7614]: I0224 05:19:51.224632 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 24 05:19:51.225262 master-0 kubenswrapper[7614]: I0224 05:19:51.224685 7614 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="cea19e65-8294-4a11-9458-560b9b3fbebf"
Feb 24 05:19:51.245097 master-0 kubenswrapper[7614]: I0224 05:19:51.241459 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podStartSLOduration=273.100456425 podStartE2EDuration="4m57.241427134s" podCreationTimestamp="2026-02-24 05:14:54 +0000 UTC" firstStartedPulling="2026-02-24 05:19:25.741009048 +0000 UTC m=+296.775752204" lastFinishedPulling="2026-02-24 05:19:49.881979737 +0000 UTC m=+320.916722913" observedRunningTime="2026-02-24 05:19:51.238696514 +0000 UTC m=+322.273439700" watchObservedRunningTime="2026-02-24 05:19:51.241427134 +0000 UTC m=+322.276170320"
Feb 24 05:19:51.262670 master-0 kubenswrapper[7614]: I0224 05:19:51.260057 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xxl55" podStartSLOduration=23.260031793 podStartE2EDuration="23.260031793s" podCreationTimestamp="2026-02-24 05:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:51.256548203 +0000 UTC m=+322.291291359" watchObservedRunningTime="2026-02-24 05:19:51.260031793 +0000 UTC m=+322.294774949"
Feb 24 05:19:51.661099 master-0 kubenswrapper[7614]: I0224 05:19:51.660813 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:19:51.664304 master-0
kubenswrapper[7614]: I0224 05:19:51.664131 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:19:51.664304 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:19:51.664304 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:19:51.664304 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:19:51.665086 master-0 kubenswrapper[7614]: I0224 05:19:51.664410 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:19:52.126190 master-0 kubenswrapper[7614]: I0224 05:19:52.124644 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm8sw" event={"ID":"8f3825c1-975c-40b5-a6ad-0f200968b3cd","Type":"ContainerStarted","Data":"fa3c6513ec21859f86a1087328f6abe91ad6ee071120c66817b3fc9969a4c811"}
Feb 24 05:19:52.127441 master-0 kubenswrapper[7614]: I0224 05:19:52.127385 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v64s6" event={"ID":"75b4304c-09f2-499e-8c2f-da603e43ba72","Type":"ContainerStarted","Data":"ae7de3342af5078caa37bed094df1dafa8dc262f6444e76d1cedcc4817cb7411"}
Feb 24 05:19:52.130551 master-0 kubenswrapper[7614]: I0224 05:19:52.130512 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-68vwc" event={"ID":"cd674e58-b749-46fb-8a28-66012fd8b401","Type":"ContainerStarted","Data":"2dc81d1a84d3af738f2784ed962234a542979021ce039b484e9563b4768da46e"}
Feb 24 05:19:52.132742 master-0 kubenswrapper[7614]: I0224 05:19:52.132702 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/certified-operators-gn8m8" event={"ID":"2c6bb439-ed17-4761-b193-580be5f6aa00","Type":"ContainerStarted","Data":"9fe8849afb56c0a7ecf45be6912f90be4aaa1ae80f75bf977ea2f2f70206f1d2"}
Feb 24 05:19:52.135621 master-0 kubenswrapper[7614]: I0224 05:19:52.135584 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerStarted","Data":"9e0cc0f7f581085a792db3f9717a0c7d3e86218c9ccfa7f2c67da547aa98fac9"}
Feb 24 05:19:52.135621 master-0 kubenswrapper[7614]: I0224 05:19:52.135619 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerStarted","Data":"4ada702e991319865f9dacb414ee4288bbdec2d1eeae1681a213589c60b83506"}
Feb 24 05:19:52.160463 master-0 kubenswrapper[7614]: I0224 05:19:52.160379 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xm8sw" podStartSLOduration=13.344494957 podStartE2EDuration="44.160355391s" podCreationTimestamp="2026-02-24 05:19:08 +0000 UTC" firstStartedPulling="2026-02-24 05:19:20.75046473 +0000 UTC m=+291.785207886" lastFinishedPulling="2026-02-24 05:19:51.566325164 +0000 UTC m=+322.601068320" observedRunningTime="2026-02-24 05:19:52.158843963 +0000 UTC m=+323.193587129" watchObservedRunningTime="2026-02-24 05:19:52.160355391 +0000 UTC m=+323.195098547"
Feb 24 05:19:52.186537 master-0 kubenswrapper[7614]: I0224 05:19:52.186463 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-68vwc" podStartSLOduration=15.394994244 podStartE2EDuration="46.186442381s" podCreationTimestamp="2026-02-24 05:19:06 +0000 UTC" firstStartedPulling="2026-02-24 05:19:20.775009781 +0000 UTC m=+291.809752937" lastFinishedPulling="2026-02-24 05:19:51.566457898 +0000 UTC
m=+322.601201074" observedRunningTime="2026-02-24 05:19:52.183015283 +0000 UTC m=+323.217758439" watchObservedRunningTime="2026-02-24 05:19:52.186442381 +0000 UTC m=+323.221185537"
Feb 24 05:19:52.206169 master-0 kubenswrapper[7614]: I0224 05:19:52.205845 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v64s6" podStartSLOduration=14.204603701 podStartE2EDuration="45.20582585s" podCreationTimestamp="2026-02-24 05:19:07 +0000 UTC" firstStartedPulling="2026-02-24 05:19:20.731722858 +0000 UTC m=+291.766466014" lastFinishedPulling="2026-02-24 05:19:51.732945007 +0000 UTC m=+322.767688163" observedRunningTime="2026-02-24 05:19:52.204837054 +0000 UTC m=+323.239580210" watchObservedRunningTime="2026-02-24 05:19:52.20582585 +0000 UTC m=+323.240569006"
Feb 24 05:19:52.226582 master-0 kubenswrapper[7614]: I0224 05:19:52.226502 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gn8m8" podStartSLOduration=16.394480952 podStartE2EDuration="47.226473791s" podCreationTimestamp="2026-02-24 05:19:05 +0000 UTC" firstStartedPulling="2026-02-24 05:19:20.736075931 +0000 UTC m=+291.770819087" lastFinishedPulling="2026-02-24 05:19:51.56806877 +0000 UTC m=+322.602811926" observedRunningTime="2026-02-24 05:19:52.226411429 +0000 UTC m=+323.261154585" watchObservedRunningTime="2026-02-24 05:19:52.226473791 +0000 UTC m=+323.261216947"
Feb 24 05:19:52.274032 master-0 kubenswrapper[7614]: I0224 05:19:52.273974 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-xjddh"]
Feb 24 05:19:52.274357 master-0 kubenswrapper[7614]: E0224 05:19:52.274239 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df29682-0936-44a2-9629-2e90115671e0" containerName="installer"
Feb 24 05:19:52.274357 master-0 kubenswrapper[7614]: I0224 05:19:52.274254 7614 state_mem.go:107] "Deleted CPUSet assignment"
podUID="4df29682-0936-44a2-9629-2e90115671e0" containerName="installer"
Feb 24 05:19:52.274357 master-0 kubenswrapper[7614]: E0224 05:19:52.274271 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8978e4e5-18ef-4b69-a127-5e9409163935" containerName="collect-profiles"
Feb 24 05:19:52.274357 master-0 kubenswrapper[7614]: I0224 05:19:52.274278 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="8978e4e5-18ef-4b69-a127-5e9409163935" containerName="collect-profiles"
Feb 24 05:19:52.274472 master-0 kubenswrapper[7614]: I0224 05:19:52.274406 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="8978e4e5-18ef-4b69-a127-5e9409163935" containerName="collect-profiles"
Feb 24 05:19:52.274472 master-0 kubenswrapper[7614]: I0224 05:19:52.274430 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df29682-0936-44a2-9629-2e90115671e0" containerName="installer"
Feb 24 05:19:52.275086 master-0 kubenswrapper[7614]: I0224 05:19:52.275055 7614 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.277751 master-0 kubenswrapper[7614]: I0224 05:19:52.277681 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-9xtkh"
Feb 24 05:19:52.278010 master-0 kubenswrapper[7614]: I0224 05:19:52.277987 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 24 05:19:52.280073 master-0 kubenswrapper[7614]: I0224 05:19:52.280032 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 24 05:19:52.295339 master-0 kubenswrapper[7614]: I0224 05:19:52.290211 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 24 05:19:52.309526 master-0 kubenswrapper[7614]: I0224 05:19:52.303109 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-xjddh"]
Feb 24 05:19:52.433513 master-0 kubenswrapper[7614]: I0224 05:19:52.433348 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lsr\" (UniqueName: \"kubernetes.io/projected/b8d28792-2365-4e9e-b61a-46cd2ef8b632-kube-api-access-77lsr\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.433513 master-0 kubenswrapper[7614]: I0224 05:19:52.433437 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") "
pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.433513 master-0 kubenswrapper[7614]: I0224 05:19:52.433470 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.433513 master-0 kubenswrapper[7614]: I0224 05:19:52.433490 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.534958 master-0 kubenswrapper[7614]: I0224 05:19:52.534886 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.534958 master-0 kubenswrapper[7614]: I0224 05:19:52.534950 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.535351 master-0 kubenswrapper[7614]: I0224 05:19:52.535272 7614 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"kube-api-access-77lsr\" (UniqueName: \"kubernetes.io/projected/b8d28792-2365-4e9e-b61a-46cd2ef8b632-kube-api-access-77lsr\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.535552 master-0 kubenswrapper[7614]: I0224 05:19:52.535520 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.536188 master-0 kubenswrapper[7614]: I0224 05:19:52.536138 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.540752 master-0 kubenswrapper[7614]: I0224 05:19:52.540021 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.540752 master-0 kubenswrapper[7614]: I0224 05:19:52.540033 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config\") pod
\"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.554678 master-0 kubenswrapper[7614]: I0224 05:19:52.554621 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lsr\" (UniqueName: \"kubernetes.io/projected/b8d28792-2365-4e9e-b61a-46cd2ef8b632-kube-api-access-77lsr\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.657304 master-0 kubenswrapper[7614]: I0224 05:19:52.657225 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:19:52.663600 master-0 kubenswrapper[7614]: I0224 05:19:52.663537 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:19:52.663600 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:19:52.663600 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:19:52.663600 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:19:52.663793 master-0 kubenswrapper[7614]: I0224 05:19:52.663632 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:19:53.109208 master-0 kubenswrapper[7614]: I0224 05:19:53.109141 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-754bc4d665-xjddh"]
Feb 24 05:19:53.112268 master-0 kubenswrapper[7614]: W0224 05:19:53.112226 7614 manager.go:1169] Failed to
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8d28792_2365_4e9e_b61a_46cd2ef8b632.slice/crio-47463debfe8a4cd4bfc5f6610d0dc3da5ba2eb733f6d27a5379ed121dc26350d WatchSource:0}: Error finding container 47463debfe8a4cd4bfc5f6610d0dc3da5ba2eb733f6d27a5379ed121dc26350d: Status 404 returned error can't find the container with id 47463debfe8a4cd4bfc5f6610d0dc3da5ba2eb733f6d27a5379ed121dc26350d Feb 24 05:19:53.144165 master-0 kubenswrapper[7614]: I0224 05:19:53.144092 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" event={"ID":"b8d28792-2365-4e9e-b61a-46cd2ef8b632","Type":"ContainerStarted","Data":"47463debfe8a4cd4bfc5f6610d0dc3da5ba2eb733f6d27a5379ed121dc26350d"} Feb 24 05:19:53.149202 master-0 kubenswrapper[7614]: I0224 05:19:53.149146 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerStarted","Data":"856274500e14cb82370664b7fa9205dec8cf8d13575deae834feb4190cf946dd"} Feb 24 05:19:53.151002 master-0 kubenswrapper[7614]: I0224 05:19:53.150979 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:19:53.188887 master-0 kubenswrapper[7614]: I0224 05:19:53.188170 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=18.188135045 podStartE2EDuration="18.188135045s" podCreationTimestamp="2026-02-24 05:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:19:53.180808157 +0000 UTC m=+324.215551323" watchObservedRunningTime="2026-02-24 05:19:53.188135045 +0000 UTC m=+324.222878211" Feb 24 05:19:53.663276 master-0 kubenswrapper[7614]: 
I0224 05:19:53.663218 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:19:53.663276 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:19:53.663276 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:19:53.663276 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:19:53.663616 master-0 kubenswrapper[7614]: I0224 05:19:53.663298 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:19:54.666993 master-0 kubenswrapper[7614]: I0224 05:19:54.666924 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:19:54.666993 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:19:54.666993 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:19:54.666993 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:19:54.667848 master-0 kubenswrapper[7614]: I0224 05:19:54.667020 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:19:55.169036 master-0 kubenswrapper[7614]: I0224 05:19:55.168937 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" 
event={"ID":"b8d28792-2365-4e9e-b61a-46cd2ef8b632","Type":"ContainerStarted","Data":"85a93b524d39757f42211035845af84f4f6c3cad4bddaa1164281282b8bdd276"} Feb 24 05:19:55.660399 master-0 kubenswrapper[7614]: I0224 05:19:55.660324 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:19:55.664053 master-0 kubenswrapper[7614]: I0224 05:19:55.663552 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:19:55.664053 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:19:55.664053 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:19:55.664053 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:19:55.664053 master-0 kubenswrapper[7614]: I0224 05:19:55.663629 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:19:56.181035 master-0 kubenswrapper[7614]: I0224 05:19:56.180931 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" event={"ID":"b8d28792-2365-4e9e-b61a-46cd2ef8b632","Type":"ContainerStarted","Data":"5618e70a0ddad686dcf190bc7892a4dc4343d2086710ec43bf62540377d50e44"} Feb 24 05:19:56.216903 master-0 kubenswrapper[7614]: I0224 05:19:56.216785 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" podStartSLOduration=2.569818396 podStartE2EDuration="4.216754662s" podCreationTimestamp="2026-02-24 05:19:52 +0000 UTC" firstStartedPulling="2026-02-24 05:19:53.115103079 +0000 UTC m=+324.149846235" 
lastFinishedPulling="2026-02-24 05:19:54.762039345 +0000 UTC m=+325.796782501" observedRunningTime="2026-02-24 05:19:56.213362895 +0000 UTC m=+327.248106091" watchObservedRunningTime="2026-02-24 05:19:56.216754662 +0000 UTC m=+327.251497858" Feb 24 05:19:56.278546 master-0 kubenswrapper[7614]: I0224 05:19:56.278449 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:56.278546 master-0 kubenswrapper[7614]: I0224 05:19:56.278550 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:56.367267 master-0 kubenswrapper[7614]: I0224 05:19:56.367188 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:56.436438 master-0 kubenswrapper[7614]: I0224 05:19:56.436253 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:56.436438 master-0 kubenswrapper[7614]: I0224 05:19:56.436339 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:56.489463 master-0 kubenswrapper[7614]: I0224 05:19:56.489389 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:56.662673 master-0 kubenswrapper[7614]: I0224 05:19:56.662601 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:19:56.662673 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:19:56.662673 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:19:56.662673 master-0 kubenswrapper[7614]: 
healthz check failed Feb 24 05:19:56.662673 master-0 kubenswrapper[7614]: I0224 05:19:56.662675 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:19:57.257078 master-0 kubenswrapper[7614]: I0224 05:19:57.256990 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:19:57.264541 master-0 kubenswrapper[7614]: I0224 05:19:57.264463 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:19:57.663410 master-0 kubenswrapper[7614]: I0224 05:19:57.663215 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:19:57.663410 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:19:57.663410 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:19:57.663410 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:19:57.663410 master-0 kubenswrapper[7614]: I0224 05:19:57.663299 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:19:58.039686 master-0 kubenswrapper[7614]: I0224 05:19:58.039620 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:58.039686 master-0 kubenswrapper[7614]: I0224 05:19:58.039702 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:58.098168 master-0 kubenswrapper[7614]: I0224 05:19:58.098108 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:58.260950 master-0 kubenswrapper[7614]: I0224 05:19:58.260873 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:19:58.664754 master-0 kubenswrapper[7614]: I0224 05:19:58.664671 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:19:58.664754 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:19:58.664754 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:19:58.664754 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:19:58.665141 master-0 kubenswrapper[7614]: I0224 05:19:58.664757 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:19:58.666669 master-0 kubenswrapper[7614]: I0224 05:19:58.666611 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk"] Feb 24 05:19:58.673302 master-0 kubenswrapper[7614]: I0224 05:19:58.673238 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.677083 master-0 kubenswrapper[7614]: I0224 05:19:58.677030 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 24 05:19:58.677503 master-0 kubenswrapper[7614]: I0224 05:19:58.677477 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 24 05:19:58.677867 master-0 kubenswrapper[7614]: I0224 05:19:58.677840 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-ll4w9" Feb 24 05:19:58.692169 master-0 kubenswrapper[7614]: I0224 05:19:58.692094 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-qk7rz"] Feb 24 05:19:58.698413 master-0 kubenswrapper[7614]: I0224 05:19:58.698365 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.706581 master-0 kubenswrapper[7614]: I0224 05:19:58.704086 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gnn9c" Feb 24 05:19:58.706581 master-0 kubenswrapper[7614]: I0224 05:19:58.704476 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 24 05:19:58.706581 master-0 kubenswrapper[7614]: I0224 05:19:58.704705 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 24 05:19:58.716019 master-0 kubenswrapper[7614]: I0224 05:19:58.712121 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk"] Feb 24 05:19:58.726330 master-0 kubenswrapper[7614]: I0224 05:19:58.723650 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"] Feb 24 05:19:58.726494 master-0 kubenswrapper[7614]: I0224 05:19:58.726393 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.737601 master-0 kubenswrapper[7614]: I0224 05:19:58.737484 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 24 05:19:58.737865 master-0 kubenswrapper[7614]: I0224 05:19:58.737829 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9sp2t" Feb 24 05:19:58.738029 master-0 kubenswrapper[7614]: I0224 05:19:58.737949 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 24 05:19:58.740341 master-0 kubenswrapper[7614]: I0224 05:19:58.740293 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.740416 master-0 kubenswrapper[7614]: I0224 05:19:58.740364 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f92qq\" (UniqueName: \"kubernetes.io/projected/bf303acd-b62e-4aa3-bd8d-15f5844302d8-kube-api-access-f92qq\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.740416 master-0 kubenswrapper[7614]: I0224 05:19:58.740407 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls\") pod 
\"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.740480 master-0 kubenswrapper[7614]: I0224 05:19:58.740445 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.740585 master-0 kubenswrapper[7614]: I0224 05:19:58.740565 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 24 05:19:58.747504 master-0 kubenswrapper[7614]: I0224 05:19:58.746191 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"] Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844180 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844259 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-textfile\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844300 7614 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-wtmp\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844358 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844389 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844417 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f92qq\" (UniqueName: \"kubernetes.io/projected/bf303acd-b62e-4aa3-bd8d-15f5844302d8-kube-api-access-f92qq\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844568 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/80cc7ad6-051b-4ee5-94af-611388d9622a-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844640 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844671 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844760 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844787 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap\") pod 
\"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.844821 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgl5l\" (UniqueName: \"kubernetes.io/projected/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-api-access-hgl5l\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.849333 master-0 kubenswrapper[7614]: I0224 05:19:58.845942 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.855329 master-0 kubenswrapper[7614]: I0224 05:19:58.850437 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-root\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.855329 master-0 kubenswrapper[7614]: I0224 05:19:58.850477 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm88x\" (UniqueName: \"kubernetes.io/projected/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-kube-api-access-lm88x\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.855329 master-0 kubenswrapper[7614]: I0224 05:19:58.850523 7614 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-sys\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.855329 master-0 kubenswrapper[7614]: I0224 05:19:58.850544 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.855329 master-0 kubenswrapper[7614]: I0224 05:19:58.850565 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.855329 master-0 kubenswrapper[7614]: I0224 05:19:58.850595 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.877336 master-0 kubenswrapper[7614]: I0224 05:19:58.871120 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: 
\"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.877336 master-0 kubenswrapper[7614]: I0224 05:19:58.872381 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f92qq\" (UniqueName: \"kubernetes.io/projected/bf303acd-b62e-4aa3-bd8d-15f5844302d8-kube-api-access-f92qq\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.877336 master-0 kubenswrapper[7614]: I0224 05:19:58.876207 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:19:58.951997 master-0 kubenswrapper[7614]: I0224 05:19:58.951861 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.951997 master-0 kubenswrapper[7614]: I0224 05:19:58.951932 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-textfile\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.952267 master-0 kubenswrapper[7614]: I0224 05:19:58.952163 7614 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-wtmp\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.952333 master-0 kubenswrapper[7614]: I0224 05:19:58.952284 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.952413 master-0 kubenswrapper[7614]: I0224 05:19:58.952387 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/80cc7ad6-051b-4ee5-94af-611388d9622a-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.952454 master-0 kubenswrapper[7614]: I0224 05:19:58.952422 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.952568 master-0 kubenswrapper[7614]: I0224 05:19:58.952542 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: 
\"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.952615 master-0 kubenswrapper[7614]: I0224 05:19:58.952591 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgl5l\" (UniqueName: \"kubernetes.io/projected/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-api-access-hgl5l\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:19:58.952670 master-0 kubenswrapper[7614]: I0224 05:19:58.952648 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-root\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.952714 master-0 kubenswrapper[7614]: I0224 05:19:58.952672 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm88x\" (UniqueName: \"kubernetes.io/projected/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-kube-api-access-lm88x\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.952746 master-0 kubenswrapper[7614]: I0224 05:19:58.952714 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-sys\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:19:58.952746 master-0 kubenswrapper[7614]: I0224 05:19:58.952733 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca\") pod 
\"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.952805 master-0 kubenswrapper[7614]: I0224 05:19:58.952763 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.952805 master-0 kubenswrapper[7614]: I0224 05:19:58.952794 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:19:58.956325 master-0 kubenswrapper[7614]: I0224 05:19:58.952965 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:19:58.956325 master-0 kubenswrapper[7614]: I0224 05:19:58.953734 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:19:58.956325 master-0 kubenswrapper[7614]: I0224 05:19:58.954165 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/80cc7ad6-051b-4ee5-94af-611388d9622a-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:19:58.956325 master-0 kubenswrapper[7614]: I0224 05:19:58.954279 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-textfile\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.956325 master-0 kubenswrapper[7614]: I0224 05:19:58.954390 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-root\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.956325 master-0 kubenswrapper[7614]: I0224 05:19:58.954670 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-wtmp\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.956325 master-0 kubenswrapper[7614]: I0224 05:19:58.954782 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.956325 master-0 kubenswrapper[7614]: I0224 05:19:58.954792 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-sys\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.961323 master-0 kubenswrapper[7614]: I0224 05:19:58.957708 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.961323 master-0 kubenswrapper[7614]: I0224 05:19:58.958246 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.961323 master-0 kubenswrapper[7614]: I0224 05:19:58.960033 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:19:58.973332 master-0 kubenswrapper[7614]: I0224 05:19:58.971639 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm88x\" (UniqueName: \"kubernetes.io/projected/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-kube-api-access-lm88x\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:58.978322 master-0 kubenswrapper[7614]: I0224 05:19:58.975026 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:19:58.978322 master-0 kubenswrapper[7614]: I0224 05:19:58.976262 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgl5l\" (UniqueName: \"kubernetes.io/projected/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-api-access-hgl5l\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:19:59.038352 master-0 kubenswrapper[7614]: I0224 05:19:59.037130 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk"
Feb 24 05:19:59.038352 master-0 kubenswrapper[7614]: I0224 05:19:59.038060 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xm8sw"
Feb 24 05:19:59.038352 master-0 kubenswrapper[7614]: I0224 05:19:59.038092 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xm8sw"
Feb 24 05:19:59.055335 master-0 kubenswrapper[7614]: I0224 05:19:59.052120 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:19:59.079664 master-0 kubenswrapper[7614]: I0224 05:19:59.077524 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:19:59.204990 master-0 kubenswrapper[7614]: I0224 05:19:59.204851 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qk7rz" event={"ID":"f2be5ed6-fdf0-4462-a319-eed1a5a1c778","Type":"ContainerStarted","Data":"6042346e04d14789f9df563facc73503846c93f9a58755284a883ae67d6dfa74"}
Feb 24 05:19:59.560639 master-0 kubenswrapper[7614]: I0224 05:19:59.557782 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk"]
Feb 24 05:19:59.571597 master-0 kubenswrapper[7614]: W0224 05:19:59.571534 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf303acd_b62e_4aa3_bd8d_15f5844302d8.slice/crio-d279f5c83a7334bb036cb98c51916708c8e0553fc71eae75ca717993b0118072 WatchSource:0}: Error finding container d279f5c83a7334bb036cb98c51916708c8e0553fc71eae75ca717993b0118072: Status 404 returned error can't find the container with id d279f5c83a7334bb036cb98c51916708c8e0553fc71eae75ca717993b0118072
Feb 24 05:19:59.571877 master-0 kubenswrapper[7614]: I0224 05:19:59.571768 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"]
Feb 24 05:19:59.576295 master-0 kubenswrapper[7614]: W0224 05:19:59.576237 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80cc7ad6_051b_4ee5_94af_611388d9622a.slice/crio-894870cb71b93cf170c026145b9ea2c31998ab3f9fd22cdcbd9083b354b5406e WatchSource:0}: Error finding container 894870cb71b93cf170c026145b9ea2c31998ab3f9fd22cdcbd9083b354b5406e: Status 404 returned error can't find the container with id 894870cb71b93cf170c026145b9ea2c31998ab3f9fd22cdcbd9083b354b5406e
Feb 24 05:19:59.667350 master-0 kubenswrapper[7614]: I0224 05:19:59.664075 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:19:59.667350 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:19:59.667350 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:19:59.667350 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:19:59.667350 master-0 kubenswrapper[7614]: I0224 05:19:59.664154 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:00.083976 master-0 kubenswrapper[7614]: I0224 05:20:00.083801 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xm8sw" podUID="8f3825c1-975c-40b5-a6ad-0f200968b3cd" containerName="registry-server" probeResult="failure" output=<
Feb 24 05:20:00.083976 master-0 kubenswrapper[7614]: timeout: failed to connect service ":50051" within 1s
Feb 24 05:20:00.083976 master-0 kubenswrapper[7614]: >
Feb 24 05:20:00.213861 master-0 kubenswrapper[7614]: I0224 05:20:00.213790 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" event={"ID":"bf303acd-b62e-4aa3-bd8d-15f5844302d8","Type":"ContainerStarted","Data":"85a52fc82ddf223f9ad85432535abd947a52d33fc494d903eb55eb170159b94a"}
Feb 24 05:20:00.213861 master-0 kubenswrapper[7614]: I0224 05:20:00.213855 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" event={"ID":"bf303acd-b62e-4aa3-bd8d-15f5844302d8","Type":"ContainerStarted","Data":"1f0662e9cfb4bb8a75dd43613f61e1b1c7dadbd6863daed4890e2e482957ac58"}
Feb 24 05:20:00.213861 master-0 kubenswrapper[7614]: I0224 05:20:00.213870 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" event={"ID":"bf303acd-b62e-4aa3-bd8d-15f5844302d8","Type":"ContainerStarted","Data":"d279f5c83a7334bb036cb98c51916708c8e0553fc71eae75ca717993b0118072"}
Feb 24 05:20:00.215848 master-0 kubenswrapper[7614]: I0224 05:20:00.215780 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" event={"ID":"80cc7ad6-051b-4ee5-94af-611388d9622a","Type":"ContainerStarted","Data":"894870cb71b93cf170c026145b9ea2c31998ab3f9fd22cdcbd9083b354b5406e"}
Feb 24 05:20:00.663858 master-0 kubenswrapper[7614]: I0224 05:20:00.663783 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:00.663858 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:00.663858 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:00.663858 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:00.664663 master-0 kubenswrapper[7614]: I0224 05:20:00.663868 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:01.241412 master-0 kubenswrapper[7614]: I0224 05:20:01.241288 7614 generic.go:334] "Generic (PLEG): container finished" podID="f2be5ed6-fdf0-4462-a319-eed1a5a1c778" containerID="9efe8f0118c66739205c89b7031607a78cfad712b2abd1398e2a5aea5ff44c44" exitCode=0
Feb 24 05:20:01.241412 master-0 kubenswrapper[7614]: I0224 05:20:01.241384 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qk7rz" event={"ID":"f2be5ed6-fdf0-4462-a319-eed1a5a1c778","Type":"ContainerDied","Data":"9efe8f0118c66739205c89b7031607a78cfad712b2abd1398e2a5aea5ff44c44"}
Feb 24 05:20:01.662980 master-0 kubenswrapper[7614]: I0224 05:20:01.662799 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:01.662980 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:01.662980 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:01.662980 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:01.662980 master-0 kubenswrapper[7614]: I0224 05:20:01.662924 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:02.252970 master-0 kubenswrapper[7614]: I0224 05:20:02.252858 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qk7rz" event={"ID":"f2be5ed6-fdf0-4462-a319-eed1a5a1c778","Type":"ContainerStarted","Data":"1c19bb4d3ae29309738be66793cbba98971f689d22948773a55eaf264364ee9a"}
Feb 24 05:20:02.662757 master-0 kubenswrapper[7614]: I0224 05:20:02.662661 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:02.662757 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:02.662757 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:02.662757 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:02.663142 master-0 kubenswrapper[7614]: I0224 05:20:02.662797 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:03.663357 master-0 kubenswrapper[7614]: I0224 05:20:03.663281 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:03.663357 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:03.663357 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:03.663357 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:03.663949 master-0 kubenswrapper[7614]: I0224 05:20:03.663378 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:04.272840 master-0 kubenswrapper[7614]: I0224 05:20:04.272452 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" event={"ID":"80cc7ad6-051b-4ee5-94af-611388d9622a","Type":"ContainerStarted","Data":"b56e532198ea23a55afb0fb1e1759dc37a6750ae3a50883b5dc0b6aef7c664e9"}
Feb 24 05:20:04.272840 master-0 kubenswrapper[7614]: I0224 05:20:04.272520 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" event={"ID":"80cc7ad6-051b-4ee5-94af-611388d9622a","Type":"ContainerStarted","Data":"20a507a5f8b66a8aaa7095c23be2b633a9dbba6c94fd4f91db42ebc121985040"}
Feb 24 05:20:04.272840 master-0 kubenswrapper[7614]: I0224 05:20:04.272534 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" event={"ID":"80cc7ad6-051b-4ee5-94af-611388d9622a","Type":"ContainerStarted","Data":"958f4106110719bafaa900c46b0ee3b0b4cee328d7df9fe4ea9a034a409b9712"}
Feb 24 05:20:04.276258 master-0 kubenswrapper[7614]: I0224 05:20:04.276214 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-qk7rz" event={"ID":"f2be5ed6-fdf0-4462-a319-eed1a5a1c778","Type":"ContainerStarted","Data":"95e4d23a5cec79d1863a79ca309b7616d82044c7591ccda82c6210db1b7118fe"}
Feb 24 05:20:04.279424 master-0 kubenswrapper[7614]: I0224 05:20:04.279359 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" event={"ID":"bf303acd-b62e-4aa3-bd8d-15f5844302d8","Type":"ContainerStarted","Data":"e40dfc970857375d1b809124a44bd92914ad52f23168053416d97b0702b54235"}
Feb 24 05:20:04.288686 master-0 kubenswrapper[7614]: I0224 05:20:04.288614 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-65cdf565cd-555rj"]
Feb 24 05:20:04.290051 master-0 kubenswrapper[7614]: I0224 05:20:04.290006 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.292033 master-0 kubenswrapper[7614]: I0224 05:20:04.291975 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hpmvm"
Feb 24 05:20:04.292912 master-0 kubenswrapper[7614]: I0224 05:20:04.292849 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7qtvbjhkqad41"
Feb 24 05:20:04.293670 master-0 kubenswrapper[7614]: I0224 05:20:04.293629 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 24 05:20:04.293788 master-0 kubenswrapper[7614]: I0224 05:20:04.293747 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 24 05:20:04.294003 master-0 kubenswrapper[7614]: I0224 05:20:04.293965 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 24 05:20:04.294003 master-0 kubenswrapper[7614]: I0224 05:20:04.293996 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 24 05:20:04.311776 master-0 kubenswrapper[7614]: I0224 05:20:04.311691 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65cdf565cd-555rj"]
Feb 24 05:20:04.315411 master-0 kubenswrapper[7614]: I0224 05:20:04.315329 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" podStartSLOduration=2.482161564 podStartE2EDuration="6.315290846s" podCreationTimestamp="2026-02-24 05:19:58 +0000 UTC" firstStartedPulling="2026-02-24 05:19:59.579583188 +0000 UTC m=+330.614326354" lastFinishedPulling="2026-02-24 05:20:03.41271247 +0000 UTC m=+334.447455636" observedRunningTime="2026-02-24 05:20:04.310299867 +0000 UTC m=+335.345043053" watchObservedRunningTime="2026-02-24 05:20:04.315290846 +0000 UTC m=+335.350034002"
Feb 24 05:20:04.363719 master-0 kubenswrapper[7614]: I0224 05:20:04.363477 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.363719 master-0 kubenswrapper[7614]: I0224 05:20:04.363647 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.363719 master-0 kubenswrapper[7614]: I0224 05:20:04.363703 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.363719 master-0 kubenswrapper[7614]: I0224 05:20:04.363727 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.363971 master-0 kubenswrapper[7614]: I0224 05:20:04.363749 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc42f\" (UniqueName: \"kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.363971 master-0 kubenswrapper[7614]: I0224 05:20:04.363779 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.363971 master-0 kubenswrapper[7614]: I0224 05:20:04.363795 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.375572 master-0 kubenswrapper[7614]: I0224 05:20:04.374545 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" podStartSLOduration=2.8550362160000002 podStartE2EDuration="6.374506668s" podCreationTimestamp="2026-02-24 05:19:58 +0000 UTC" firstStartedPulling="2026-02-24 05:19:59.907216707 +0000 UTC m=+330.941959863" lastFinishedPulling="2026-02-24 05:20:03.426687149 +0000 UTC m=+334.461430315" observedRunningTime="2026-02-24 05:20:04.372664871 +0000 UTC m=+335.407408027" watchObservedRunningTime="2026-02-24 05:20:04.374506668 +0000 UTC m=+335.409249824"
Feb 24 05:20:04.398726 master-0 kubenswrapper[7614]: I0224 05:20:04.398565 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-qk7rz" podStartSLOduration=5.339615631 podStartE2EDuration="6.398527965s" podCreationTimestamp="2026-02-24 05:19:58 +0000 UTC" firstStartedPulling="2026-02-24 05:19:59.078947641 +0000 UTC m=+330.113690797" lastFinishedPulling="2026-02-24 05:20:00.137859975 +0000 UTC m=+331.172603131" observedRunningTime="2026-02-24 05:20:04.395400065 +0000 UTC m=+335.430143241" watchObservedRunningTime="2026-02-24 05:20:04.398527965 +0000 UTC m=+335.433271121"
Feb 24 05:20:04.465582 master-0 kubenswrapper[7614]: I0224 05:20:04.465493 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.465582 master-0 kubenswrapper[7614]: I0224 05:20:04.465553 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.465990 master-0 kubenswrapper[7614]: I0224 05:20:04.465637 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.465990 master-0 kubenswrapper[7614]: I0224 05:20:04.465676 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.465990 master-0 kubenswrapper[7614]: I0224 05:20:04.465844 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.465990 master-0 kubenswrapper[7614]: I0224 05:20:04.465923 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.465990 master-0 kubenswrapper[7614]: I0224 05:20:04.465978 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc42f\" (UniqueName: \"kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.466292 master-0 kubenswrapper[7614]: I0224 05:20:04.466108 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.468504 master-0 kubenswrapper[7614]: I0224 05:20:04.467011 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.468504 master-0 kubenswrapper[7614]: I0224 05:20:04.467048 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.469921 master-0 kubenswrapper[7614]: I0224 05:20:04.469846 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.472287 master-0 kubenswrapper[7614]: I0224 05:20:04.472221 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.473681 master-0 kubenswrapper[7614]: I0224 05:20:04.473628 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.486845 master-0 kubenswrapper[7614]: I0224 05:20:04.486787 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc42f\" (UniqueName: \"kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.609664 master-0 kubenswrapper[7614]: I0224 05:20:04.609446 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:04.663977 master-0 kubenswrapper[7614]: I0224 05:20:04.663871 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:04.663977 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:04.663977 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:04.663977 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:04.664768 master-0 kubenswrapper[7614]: I0224 05:20:04.664021 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:05.060674 master-0 kubenswrapper[7614]: I0224 05:20:05.060608 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-65cdf565cd-555rj"]
Feb 24 05:20:05.287687 master-0 kubenswrapper[7614]: I0224 05:20:05.287625 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" event={"ID":"2f48332e-92de-42aa-a6e6-db161f005e74","Type":"ContainerStarted","Data":"4ebd137aadd86a90697f1884cb52d1970bb5138e39026928308cfa18816924e6"}
Feb 24 05:20:05.663254 master-0 kubenswrapper[7614]: I0224 05:20:05.663153 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:05.663254 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:05.663254 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:05.663254 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:05.663738 master-0 kubenswrapper[7614]: I0224 05:20:05.663265 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:06.682677 master-0 kubenswrapper[7614]: I0224 05:20:06.682599 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:06.682677 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:06.682677 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:06.682677 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:06.683416 master-0 kubenswrapper[7614]: I0224 05:20:06.682707 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:07.303811 master-0 kubenswrapper[7614]: I0224 05:20:07.303699 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" event={"ID":"2f48332e-92de-42aa-a6e6-db161f005e74","Type":"ContainerStarted","Data":"4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb"}
Feb 24 05:20:07.332509 master-0 kubenswrapper[7614]: I0224 05:20:07.332410 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" podStartSLOduration=1.5256096609999998 podStartE2EDuration="3.332390186s" podCreationTimestamp="2026-02-24 05:20:04 +0000 UTC" firstStartedPulling="2026-02-24 05:20:05.06914816 +0000 UTC m=+336.103891306" lastFinishedPulling="2026-02-24 05:20:06.875928675 +0000 UTC m=+337.910671831" observedRunningTime="2026-02-24 05:20:07.332362325 +0000 UTC m=+338.367105501" watchObservedRunningTime="2026-02-24 05:20:07.332390186 +0000 UTC m=+338.367133342"
Feb 24 05:20:07.663360 master-0 kubenswrapper[7614]: I0224 05:20:07.663115 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:07.663360 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:07.663360 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:07.663360 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:07.663360 master-0 kubenswrapper[7614]: I0224 05:20:07.663222 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:08.663679 master-0 kubenswrapper[7614]: I0224 05:20:08.663590 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:08.663679 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:08.663679 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:08.663679 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:08.664627 master-0 kubenswrapper[7614]: I0224 05:20:08.663701 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:09.107814 master-0 kubenswrapper[7614]: I0224 05:20:09.107737 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xm8sw"
Feb 24 05:20:09.166414 master-0 kubenswrapper[7614]: I0224 05:20:09.166292 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xm8sw"
Feb 24 05:20:09.663440 master-0 kubenswrapper[7614]: I0224 05:20:09.663241 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:09.663440 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:09.663440 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:09.663440 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:09.663440 master-0 kubenswrapper[7614]: I0224 05:20:09.663393 7614 prober.go:107] "Probe failed"
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:10.664289 master-0 kubenswrapper[7614]: I0224 05:20:10.664169 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:10.664289 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:10.664289 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:10.664289 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:10.665759 master-0 kubenswrapper[7614]: I0224 05:20:10.664337 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:11.663609 master-0 kubenswrapper[7614]: I0224 05:20:11.663506 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:11.663609 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:11.663609 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:11.663609 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:11.663609 master-0 kubenswrapper[7614]: I0224 05:20:11.663609 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
24 05:20:12.663760 master-0 kubenswrapper[7614]: I0224 05:20:12.663648 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:12.663760 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:12.663760 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:12.663760 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:12.664897 master-0 kubenswrapper[7614]: I0224 05:20:12.663787 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:13.663781 master-0 kubenswrapper[7614]: I0224 05:20:13.663644 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:13.663781 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:13.663781 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:13.663781 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:13.665048 master-0 kubenswrapper[7614]: I0224 05:20:13.663805 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:14.664555 master-0 kubenswrapper[7614]: I0224 05:20:14.664454 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:14.664555 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:14.664555 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:14.664555 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:14.665711 master-0 kubenswrapper[7614]: I0224 05:20:14.664579 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:15.664444 master-0 kubenswrapper[7614]: I0224 05:20:15.664347 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:15.664444 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:15.664444 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:15.664444 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:15.665166 master-0 kubenswrapper[7614]: I0224 05:20:15.664521 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:16.664432 master-0 kubenswrapper[7614]: I0224 05:20:16.664357 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:16.664432 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:16.664432 
master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:16.664432 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:16.665362 master-0 kubenswrapper[7614]: I0224 05:20:16.664470 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:17.664211 master-0 kubenswrapper[7614]: I0224 05:20:17.664135 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:17.664211 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:17.664211 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:17.664211 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:17.664582 master-0 kubenswrapper[7614]: I0224 05:20:17.664248 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:18.663209 master-0 kubenswrapper[7614]: I0224 05:20:18.663069 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:18.663209 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:18.663209 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:18.663209 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:18.663209 master-0 kubenswrapper[7614]: I0224 05:20:18.663192 7614 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:19.664002 master-0 kubenswrapper[7614]: I0224 05:20:19.663866 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:19.664002 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:19.664002 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:19.664002 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:19.664690 master-0 kubenswrapper[7614]: I0224 05:20:19.664656 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:20.664868 master-0 kubenswrapper[7614]: I0224 05:20:20.664749 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:20.664868 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:20.664868 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:20.664868 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:20.664868 master-0 kubenswrapper[7614]: I0224 05:20:20.664864 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 05:20:21.662527 master-0 kubenswrapper[7614]: I0224 05:20:21.662427 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:21.662527 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:21.662527 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:21.662527 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:21.662931 master-0 kubenswrapper[7614]: I0224 05:20:21.662540 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:22.663413 master-0 kubenswrapper[7614]: I0224 05:20:22.663266 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:22.663413 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:22.663413 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:22.663413 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:22.664759 master-0 kubenswrapper[7614]: I0224 05:20:22.663421 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:23.663102 master-0 kubenswrapper[7614]: I0224 05:20:23.662968 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:23.663102 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:23.663102 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:23.663102 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:23.664195 master-0 kubenswrapper[7614]: I0224 05:20:23.663110 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:24.610551 master-0 kubenswrapper[7614]: I0224 05:20:24.610425 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:20:24.610551 master-0 kubenswrapper[7614]: I0224 05:20:24.610563 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:20:24.664880 master-0 kubenswrapper[7614]: I0224 05:20:24.664784 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:24.664880 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:24.664880 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:24.664880 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:24.665534 master-0 kubenswrapper[7614]: I0224 05:20:24.664904 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 05:20:25.663161 master-0 kubenswrapper[7614]: I0224 05:20:25.663064 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:25.663161 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:25.663161 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:25.663161 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:25.663619 master-0 kubenswrapper[7614]: I0224 05:20:25.663173 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:26.665784 master-0 kubenswrapper[7614]: I0224 05:20:26.665693 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:26.665784 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:26.665784 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:26.665784 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:26.666625 master-0 kubenswrapper[7614]: I0224 05:20:26.665805 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:27.663618 master-0 kubenswrapper[7614]: I0224 05:20:27.663511 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:27.663618 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:27.663618 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:27.663618 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:27.664063 master-0 kubenswrapper[7614]: I0224 05:20:27.663663 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:28.663580 master-0 kubenswrapper[7614]: I0224 05:20:28.663502 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:28.663580 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:28.663580 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:28.663580 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:28.664602 master-0 kubenswrapper[7614]: I0224 05:20:28.663619 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:29.664457 master-0 kubenswrapper[7614]: I0224 05:20:29.664208 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:29.664457 master-0 kubenswrapper[7614]: 
[-]has-synced failed: reason withheld Feb 24 05:20:29.664457 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:29.664457 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:29.664457 master-0 kubenswrapper[7614]: I0224 05:20:29.664353 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:30.663986 master-0 kubenswrapper[7614]: I0224 05:20:30.663854 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:30.663986 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:30.663986 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:30.663986 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:30.664580 master-0 kubenswrapper[7614]: I0224 05:20:30.663992 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:31.663022 master-0 kubenswrapper[7614]: I0224 05:20:31.662911 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:31.663022 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:31.663022 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:31.663022 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:31.663022 master-0 
kubenswrapper[7614]: I0224 05:20:31.663001 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:32.664086 master-0 kubenswrapper[7614]: I0224 05:20:32.663974 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:32.664086 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:32.664086 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:32.664086 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:32.665178 master-0 kubenswrapper[7614]: I0224 05:20:32.664105 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:33.664216 master-0 kubenswrapper[7614]: I0224 05:20:33.664110 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:33.664216 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:33.664216 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:33.664216 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:33.664216 master-0 kubenswrapper[7614]: I0224 05:20:33.664206 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:34.663477 master-0 kubenswrapper[7614]: I0224 05:20:34.663377 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:34.663477 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:34.663477 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:34.663477 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:34.663912 master-0 kubenswrapper[7614]: I0224 05:20:34.663486 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:35.663781 master-0 kubenswrapper[7614]: I0224 05:20:35.663677 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:35.663781 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:35.663781 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:35.663781 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:35.664916 master-0 kubenswrapper[7614]: I0224 05:20:35.663787 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:36.588540 master-0 kubenswrapper[7614]: I0224 05:20:36.588430 7614 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:20:36.663550 master-0 kubenswrapper[7614]: I0224 05:20:36.663420 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:36.663550 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:36.663550 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:36.663550 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:36.664606 master-0 kubenswrapper[7614]: I0224 05:20:36.663576 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:37.558735 master-0 kubenswrapper[7614]: I0224 05:20:37.558631 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/1.log" Feb 24 05:20:37.560248 master-0 kubenswrapper[7614]: I0224 05:20:37.560175 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/0.log" Feb 24 05:20:37.560415 master-0 kubenswrapper[7614]: I0224 05:20:37.560272 7614 generic.go:334] "Generic (PLEG): container finished" podID="3d6b1ce7-1213-494c-829d-186d39eac7eb" containerID="f9e75ea6f0c81eec46e337376adf731ab535fc067c7d1c6d227f14a9e7433ffe" exitCode=1 Feb 24 05:20:37.560510 master-0 kubenswrapper[7614]: I0224 05:20:37.560378 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" 
event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerDied","Data":"f9e75ea6f0c81eec46e337376adf731ab535fc067c7d1c6d227f14a9e7433ffe"} Feb 24 05:20:37.560582 master-0 kubenswrapper[7614]: I0224 05:20:37.560502 7614 scope.go:117] "RemoveContainer" containerID="dd6d3f4e8c90f9e72cf283fa2ee57699a971df08e7b5a82fbc21deb33aca4d26" Feb 24 05:20:37.561771 master-0 kubenswrapper[7614]: I0224 05:20:37.561731 7614 scope.go:117] "RemoveContainer" containerID="f9e75ea6f0c81eec46e337376adf731ab535fc067c7d1c6d227f14a9e7433ffe" Feb 24 05:20:37.562403 master-0 kubenswrapper[7614]: E0224 05:20:37.562362 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:20:37.665145 master-0 kubenswrapper[7614]: I0224 05:20:37.664482 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:37.665145 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:37.665145 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:37.665145 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:37.665145 master-0 kubenswrapper[7614]: I0224 05:20:37.664602 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:38.574914 master-0 kubenswrapper[7614]: I0224 05:20:38.574828 7614 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/1.log" Feb 24 05:20:38.663729 master-0 kubenswrapper[7614]: I0224 05:20:38.663634 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:38.663729 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:38.663729 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:38.663729 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:38.663729 master-0 kubenswrapper[7614]: I0224 05:20:38.663725 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:39.663656 master-0 kubenswrapper[7614]: I0224 05:20:39.663437 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:39.663656 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:39.663656 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:39.663656 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:39.663656 master-0 kubenswrapper[7614]: I0224 05:20:39.663572 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:40.664376 master-0 
kubenswrapper[7614]: I0224 05:20:40.664229 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:40.664376 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:40.664376 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:40.664376 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:40.665934 master-0 kubenswrapper[7614]: I0224 05:20:40.665502 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:41.663663 master-0 kubenswrapper[7614]: I0224 05:20:41.663543 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:41.663663 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:41.663663 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:41.663663 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:41.664140 master-0 kubenswrapper[7614]: I0224 05:20:41.663659 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:42.663878 master-0 kubenswrapper[7614]: I0224 05:20:42.663765 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:42.663878 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:42.663878 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:42.663878 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:42.663878 master-0 kubenswrapper[7614]: I0224 05:20:42.663862 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:43.662904 master-0 kubenswrapper[7614]: I0224 05:20:43.662817 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:43.662904 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:43.662904 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:43.662904 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:43.663285 master-0 kubenswrapper[7614]: I0224 05:20:43.662914 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:44.622425 master-0 kubenswrapper[7614]: I0224 05:20:44.622279 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:44.629195 master-0 kubenswrapper[7614]: I0224 05:20:44.629146 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:20:44.663521 master-0 kubenswrapper[7614]:
I0224 05:20:44.663446 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:44.663521 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:44.663521 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:44.663521 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:44.664171 master-0 kubenswrapper[7614]: I0224 05:20:44.663538 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:45.662725 master-0 kubenswrapper[7614]: I0224 05:20:45.662641 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:45.662725 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:45.662725 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:45.662725 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:45.663340 master-0 kubenswrapper[7614]: I0224 05:20:45.662753 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:46.666259 master-0 kubenswrapper[7614]: I0224 05:20:46.666164 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:46.666259 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:46.666259 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:46.666259 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:46.667656 master-0 kubenswrapper[7614]: I0224 05:20:46.666278 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:47.663667 master-0 kubenswrapper[7614]: I0224 05:20:47.663575 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:47.663667 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:47.663667 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:47.663667 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:47.664034 master-0 kubenswrapper[7614]: I0224 05:20:47.663684 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:48.664044 master-0 kubenswrapper[7614]: I0224 05:20:48.663930 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:48.664044 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:48.664044 master-0 kubenswrapper[7614]: [+]process-running ok 
Feb 24 05:20:48.664044 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:48.665382 master-0 kubenswrapper[7614]: I0224 05:20:48.664067 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:49.664054 master-0 kubenswrapper[7614]: I0224 05:20:49.663909 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:49.664054 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:49.664054 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:49.664054 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:49.665074 master-0 kubenswrapper[7614]: I0224 05:20:49.664949 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:50.174813 master-0 kubenswrapper[7614]: I0224 05:20:50.174740 7614 scope.go:117] "RemoveContainer" containerID="f9e75ea6f0c81eec46e337376adf731ab535fc067c7d1c6d227f14a9e7433ffe"
Feb 24 05:20:50.664386 master-0 kubenswrapper[7614]: I0224 05:20:50.664142 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:50.664386 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:50.664386 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:50.664386 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:50.664386 master-0 kubenswrapper[7614]: I0224 05:20:50.664343 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:50.679536 master-0 kubenswrapper[7614]: I0224 05:20:50.679483 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/1.log"
Feb 24 05:20:50.680273 master-0 kubenswrapper[7614]: I0224 05:20:50.680219 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"50c8d66910cbcf1dcdc03811dff2f9abc3d95e2e93235a68b4cc89109830e7b9"}
Feb 24 05:20:51.663519 master-0 kubenswrapper[7614]: I0224 05:20:51.663421 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:20:51.663519 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:20:51.663519 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:20:51.663519 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:20:51.663920 master-0 kubenswrapper[7614]: I0224 05:20:51.663544 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:20:52.663776 master-0 kubenswrapper[7614]: I0224 05:20:52.663691 7614 patch_prober.go:28] interesting
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:52.663776 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:52.663776 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:52.663776 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:52.664962 master-0 kubenswrapper[7614]: I0224 05:20:52.663791 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:53.663048 master-0 kubenswrapper[7614]: I0224 05:20:53.662950 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:53.663048 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:53.663048 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:53.663048 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:53.663393 master-0 kubenswrapper[7614]: I0224 05:20:53.663088 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:54.665615 master-0 kubenswrapper[7614]: I0224 05:20:54.665490 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
05:20:54.665615 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:54.665615 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:54.665615 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:54.666701 master-0 kubenswrapper[7614]: I0224 05:20:54.665621 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:55.663715 master-0 kubenswrapper[7614]: I0224 05:20:55.663609 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:55.663715 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:55.663715 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:55.663715 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:55.664282 master-0 kubenswrapper[7614]: I0224 05:20:55.663748 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:56.664052 master-0 kubenswrapper[7614]: I0224 05:20:56.663962 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:56.664052 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:56.664052 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:56.664052 master-0 kubenswrapper[7614]: healthz 
check failed Feb 24 05:20:56.665102 master-0 kubenswrapper[7614]: I0224 05:20:56.664076 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:57.663945 master-0 kubenswrapper[7614]: I0224 05:20:57.663836 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:57.663945 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:57.663945 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:57.663945 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:57.665058 master-0 kubenswrapper[7614]: I0224 05:20:57.663968 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:58.663684 master-0 kubenswrapper[7614]: I0224 05:20:58.663573 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:58.663684 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:58.663684 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:58.663684 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:58.663684 master-0 kubenswrapper[7614]: I0224 05:20:58.663685 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:20:59.664649 master-0 kubenswrapper[7614]: I0224 05:20:59.664441 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:20:59.664649 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:20:59.664649 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:20:59.664649 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:20:59.664649 master-0 kubenswrapper[7614]: I0224 05:20:59.664564 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:00.664939 master-0 kubenswrapper[7614]: I0224 05:21:00.664846 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:00.664939 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:00.664939 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:00.664939 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:00.664939 master-0 kubenswrapper[7614]: I0224 05:21:00.664944 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:01.663728 master-0 kubenswrapper[7614]: I0224 05:21:01.663643 7614 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:01.663728 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:01.663728 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:01.663728 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:01.664151 master-0 kubenswrapper[7614]: I0224 05:21:01.663760 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:02.664624 master-0 kubenswrapper[7614]: I0224 05:21:02.664419 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:02.664624 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:02.664624 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:02.664624 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:02.664624 master-0 kubenswrapper[7614]: I0224 05:21:02.664570 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:03.663010 master-0 kubenswrapper[7614]: I0224 05:21:03.662910 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:03.663010 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:03.663010 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:03.663010 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:03.663608 master-0 kubenswrapper[7614]: I0224 05:21:03.663043 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:04.663451 master-0 kubenswrapper[7614]: I0224 05:21:04.663377 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:04.663451 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:04.663451 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:04.663451 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:04.664362 master-0 kubenswrapper[7614]: I0224 05:21:04.663469 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:05.663375 master-0 kubenswrapper[7614]: I0224 05:21:05.663282 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:05.663375 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:05.663375 master-0 kubenswrapper[7614]: [+]process-running ok 
Feb 24 05:21:05.663375 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:05.663375 master-0 kubenswrapper[7614]: I0224 05:21:05.663360 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:06.663691 master-0 kubenswrapper[7614]: I0224 05:21:06.663588 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:06.663691 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:06.663691 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:06.663691 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:06.663691 master-0 kubenswrapper[7614]: I0224 05:21:06.663691 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:07.663385 master-0 kubenswrapper[7614]: I0224 05:21:07.663276 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:07.663385 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:07.663385 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:07.663385 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:07.663385 master-0 kubenswrapper[7614]: I0224 05:21:07.663390 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:08.663046 master-0 kubenswrapper[7614]: I0224 05:21:08.662947 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:08.663046 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:08.663046 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:08.663046 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:08.663400 master-0 kubenswrapper[7614]: I0224 05:21:08.663068 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:09.664467 master-0 kubenswrapper[7614]: I0224 05:21:09.664286 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:09.664467 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:09.664467 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:09.664467 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:09.665856 master-0 kubenswrapper[7614]: I0224 05:21:09.664499 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:10.664488 
master-0 kubenswrapper[7614]: I0224 05:21:10.664386 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:10.664488 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:10.664488 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:10.664488 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:10.665448 master-0 kubenswrapper[7614]: I0224 05:21:10.664502 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:11.663345 master-0 kubenswrapper[7614]: I0224 05:21:11.663233 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:11.663345 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:11.663345 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:11.663345 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:11.663753 master-0 kubenswrapper[7614]: I0224 05:21:11.663368 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:12.664735 master-0 kubenswrapper[7614]: I0224 05:21:12.664642 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:12.664735 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:12.664735 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:12.664735 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:12.665573 master-0 kubenswrapper[7614]: I0224 05:21:12.664754 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:13.664270 master-0 kubenswrapper[7614]: I0224 05:21:13.664132 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:13.664270 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:13.664270 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:13.664270 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:13.664270 master-0 kubenswrapper[7614]: I0224 05:21:13.664270 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:14.663302 master-0 kubenswrapper[7614]: I0224 05:21:14.663181 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:14.663302 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:14.663302 master-0 
kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:14.663302 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:14.663302 master-0 kubenswrapper[7614]: I0224 05:21:14.663293 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:15.664095 master-0 kubenswrapper[7614]: I0224 05:21:15.663973 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:15.664095 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:15.664095 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:15.664095 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:15.664957 master-0 kubenswrapper[7614]: I0224 05:21:15.664117 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:21:16.663919 master-0 kubenswrapper[7614]: I0224 05:21:16.663789 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:21:16.663919 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:21:16.663919 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:21:16.663919 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:21:16.665348 master-0 kubenswrapper[7614]: I0224 05:21:16.663938 7614 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:17.663608 master-0 kubenswrapper[7614]: I0224 05:21:17.663423 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:17.663608 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:17.663608 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:17.663608 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:17.663608 master-0 kubenswrapper[7614]: I0224 05:21:17.663572 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:18.663659 master-0 kubenswrapper[7614]: I0224 05:21:18.663564 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:18.663659 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:18.663659 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:18.663659 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:18.664774 master-0 kubenswrapper[7614]: I0224 05:21:18.663687 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode:
500"
Feb 24 05:21:19.663897 master-0 kubenswrapper[7614]: I0224 05:21:19.663807 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:19.663897 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:19.663897 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:19.663897 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:19.663897 master-0 kubenswrapper[7614]: I0224 05:21:19.663893 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:20.662605 master-0 kubenswrapper[7614]: I0224 05:21:20.662518 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:20.662605 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:20.662605 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:20.662605 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:20.662955 master-0 kubenswrapper[7614]: I0224 05:21:20.662631 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:21.663041 master-0 kubenswrapper[7614]: I0224 05:21:21.662924 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:21.663041 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:21.663041 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:21.663041 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:21.664239 master-0 kubenswrapper[7614]: I0224 05:21:21.663044 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:22.663947 master-0 kubenswrapper[7614]: I0224 05:21:22.663843 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:22.663947 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:22.663947 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:22.663947 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:22.665415 master-0 kubenswrapper[7614]: I0224 05:21:22.663959 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:23.663090 master-0 kubenswrapper[7614]: I0224 05:21:23.663012 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:23.663090 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24
05:21:23.663090 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:23.663090 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:23.663736 master-0 kubenswrapper[7614]: I0224 05:21:23.663685 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:24.700047 master-0 kubenswrapper[7614]: I0224 05:21:24.699944 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:24.700047 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:24.700047 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:24.700047 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:24.700790 master-0 kubenswrapper[7614]: I0224 05:21:24.700070 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:25.663959 master-0 kubenswrapper[7614]: I0224 05:21:25.663811 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:25.663959 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:25.663959 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:25.663959 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:25.663959 master-0 kubenswrapper[7614]: I0224 05:21:25.663953
7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:26.662911 master-0 kubenswrapper[7614]: I0224 05:21:26.662800 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:26.662911 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:26.662911 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:26.662911 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:26.664231 master-0 kubenswrapper[7614]: I0224 05:21:26.662935 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:27.664325 master-0 kubenswrapper[7614]: I0224 05:21:27.664216 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:27.664325 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:27.664325 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:27.664325 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:27.664994 master-0 kubenswrapper[7614]: I0224 05:21:27.664342 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP
probe failed with statuscode: 500"
Feb 24 05:21:28.663526 master-0 kubenswrapper[7614]: I0224 05:21:28.663416 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:28.663526 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:28.663526 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:28.663526 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:28.664229 master-0 kubenswrapper[7614]: I0224 05:21:28.663583 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:29.661816 master-0 kubenswrapper[7614]: I0224 05:21:29.661722 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:29.661816 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:29.661816 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:29.661816 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:29.662605 master-0 kubenswrapper[7614]: I0224 05:21:29.661840 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:30.663886 master-0 kubenswrapper[7614]: I0224 05:21:30.663775 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:30.663886 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:30.663886 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:30.663886 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:30.665041 master-0 kubenswrapper[7614]: I0224 05:21:30.663904 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:31.663468 master-0 kubenswrapper[7614]: I0224 05:21:31.663360 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:31.663468 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:31.663468 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:31.663468 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:31.663892 master-0 kubenswrapper[7614]: I0224 05:21:31.663501 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:32.662719 master-0 kubenswrapper[7614]: I0224 05:21:32.662580 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:32.662719 master-0 kubenswrapper[7614]:
[-]has-synced failed: reason withheld
Feb 24 05:21:32.662719 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:32.662719 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:32.663821 master-0 kubenswrapper[7614]: I0224 05:21:32.662754 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:33.662534 master-0 kubenswrapper[7614]: I0224 05:21:33.662447 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:33.662534 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:33.662534 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:33.662534 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:33.664000 master-0 kubenswrapper[7614]: I0224 05:21:33.662559 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:34.664130 master-0 kubenswrapper[7614]: I0224 05:21:34.664037 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:34.664130 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:34.664130 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:34.664130 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:34.665227 master-0
kubenswrapper[7614]: I0224 05:21:34.664137 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:35.663203 master-0 kubenswrapper[7614]: I0224 05:21:35.663066 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:35.663203 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:35.663203 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:35.663203 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:35.663793 master-0 kubenswrapper[7614]: I0224 05:21:35.663230 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:36.664118 master-0 kubenswrapper[7614]: I0224 05:21:36.664006 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:36.664118 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:36.664118 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:36.664118 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:36.664118 master-0 kubenswrapper[7614]: I0224 05:21:36.664101 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e"
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:37.663353 master-0 kubenswrapper[7614]: I0224 05:21:37.663237 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:37.663353 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:37.663353 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:37.663353 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:37.663862 master-0 kubenswrapper[7614]: I0224 05:21:37.663366 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:38.663101 master-0 kubenswrapper[7614]: I0224 05:21:38.662973 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:38.663101 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:38.663101 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:38.663101 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:38.664461 master-0 kubenswrapper[7614]: I0224 05:21:38.663910 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:39.664638 master-0 kubenswrapper[7614]: I0224 05:21:39.664543 7614 patch_prober.go:28] interesting
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:39.664638 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:39.664638 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:39.664638 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:39.666030 master-0 kubenswrapper[7614]: I0224 05:21:39.664643 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:40.664074 master-0 kubenswrapper[7614]: I0224 05:21:40.663943 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:40.664074 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:40.664074 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:40.664074 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:40.664074 master-0 kubenswrapper[7614]: I0224 05:21:40.664056 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:41.664190 master-0 kubenswrapper[7614]: I0224 05:21:41.664082 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24
05:21:41.664190 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:41.664190 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:41.664190 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:41.664190 master-0 kubenswrapper[7614]: I0224 05:21:41.664186 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:42.664008 master-0 kubenswrapper[7614]: I0224 05:21:42.663855 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:42.664008 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:42.664008 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:42.664008 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:42.664008 master-0 kubenswrapper[7614]: I0224 05:21:42.663982 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:43.664652 master-0 kubenswrapper[7614]: I0224 05:21:43.664511 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:43.664652 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:43.664652 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:43.664652 master-0 kubenswrapper[7614]: healthz
check failed
Feb 24 05:21:43.665792 master-0 kubenswrapper[7614]: I0224 05:21:43.664668 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:44.662342 master-0 kubenswrapper[7614]: I0224 05:21:44.662194 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:44.662342 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:44.662342 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:44.662342 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:44.663141 master-0 kubenswrapper[7614]: I0224 05:21:44.662369 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:45.664359 master-0 kubenswrapper[7614]: I0224 05:21:45.664231 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:45.664359 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:45.664359 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:45.664359 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:45.665513 master-0 kubenswrapper[7614]: I0224 05:21:45.664413 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:46.663210 master-0 kubenswrapper[7614]: I0224 05:21:46.663112 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:46.663210 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:46.663210 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:46.663210 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:46.663210 master-0 kubenswrapper[7614]: I0224 05:21:46.663211 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:47.663151 master-0 kubenswrapper[7614]: I0224 05:21:47.663028 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:47.663151 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:47.663151 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:47.663151 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:47.664253 master-0 kubenswrapper[7614]: I0224 05:21:47.663170 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:48.663924 master-0 kubenswrapper[7614]: I0224 05:21:48.663827 7614
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:48.663924 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:48.663924 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:48.663924 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:48.664986 master-0 kubenswrapper[7614]: I0224 05:21:48.663934 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:49.663155 master-0 kubenswrapper[7614]: I0224 05:21:49.663051 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:49.663155 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:49.663155 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:49.663155 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:49.663715 master-0 kubenswrapper[7614]: I0224 05:21:49.663181 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:50.088068 master-0 kubenswrapper[7614]: I0224 05:21:50.087959 7614 scope.go:117] "RemoveContainer" containerID="ab447b6da9854f88d9ed73e853efdddd099f2776799cafee02fcb896b0a6f932"
Feb 24 05:21:50.663974 master-0 kubenswrapper[7614]: I0224 05:21:50.663873 7614 patch_prober.go:28]
interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:21:50.663974 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:21:50.663974 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:21:50.663974 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:21:50.664588 master-0 kubenswrapper[7614]: I0224 05:21:50.663996 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:21:50.664588 master-0 kubenswrapper[7614]: I0224 05:21:50.664099 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:21:50.665414 master-0 kubenswrapper[7614]: I0224 05:21:50.665346 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"140a9b5fdc72c4b3ab1b7bcc97ac10d0500b7b5e5c7d097d9570d8dd233f08cb"} pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" containerMessage="Container router failed startup probe, will be restarted"
Feb 24 05:21:50.665532 master-0 kubenswrapper[7614]: I0224 05:21:50.665434 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" containerID="cri-o://140a9b5fdc72c4b3ab1b7bcc97ac10d0500b7b5e5c7d097d9570d8dd233f08cb" gracePeriod=3600
Feb 24 05:22:37.630531 master-0 kubenswrapper[7614]: I0224 05:22:37.630446 7614 generic.go:334] "Generic (PLEG): container finished" podID="be7a4b9e-1e9a-4298-b804-21b683805c0e"
containerID="140a9b5fdc72c4b3ab1b7bcc97ac10d0500b7b5e5c7d097d9570d8dd233f08cb" exitCode=0
Feb 24 05:22:37.631606 master-0 kubenswrapper[7614]: I0224 05:22:37.630547 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerDied","Data":"140a9b5fdc72c4b3ab1b7bcc97ac10d0500b7b5e5c7d097d9570d8dd233f08cb"}
Feb 24 05:22:37.631606 master-0 kubenswrapper[7614]: I0224 05:22:37.630643 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerStarted","Data":"644f295cce6b864cf139013130d16889b14ef33754986616f48c2d2d58ffa92d"}
Feb 24 05:22:37.661515 master-0 kubenswrapper[7614]: I0224 05:22:37.661417 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:22:37.665444 master-0 kubenswrapper[7614]: I0224 05:22:37.665388 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:22:37.665444 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:22:37.665444 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:22:37.665444 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:22:37.666224 master-0 kubenswrapper[7614]: I0224 05:22:37.665464 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:22:38.664878 master-0 kubenswrapper[7614]: I0224 05:22:38.664762 7614 patch_prober.go:28] interesting
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:38.664878 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:38.664878 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:38.664878 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:38.664878 master-0 kubenswrapper[7614]: I0224 05:22:38.664880 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:39.663198 master-0 kubenswrapper[7614]: I0224 05:22:39.663103 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:39.663198 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:39.663198 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:39.663198 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:39.663742 master-0 kubenswrapper[7614]: I0224 05:22:39.663215 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:40.663972 master-0 kubenswrapper[7614]: I0224 05:22:40.663830 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
05:22:40.663972 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:40.663972 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:40.663972 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:40.665254 master-0 kubenswrapper[7614]: I0224 05:22:40.663976 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:41.663460 master-0 kubenswrapper[7614]: I0224 05:22:41.663299 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:41.663460 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:41.663460 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:41.663460 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:41.664031 master-0 kubenswrapper[7614]: I0224 05:22:41.663500 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:42.664348 master-0 kubenswrapper[7614]: I0224 05:22:42.664219 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:42.664348 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:42.664348 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:42.664348 master-0 kubenswrapper[7614]: healthz 
check failed Feb 24 05:22:42.665874 master-0 kubenswrapper[7614]: I0224 05:22:42.664389 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:43.662742 master-0 kubenswrapper[7614]: I0224 05:22:43.662625 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:43.662742 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:43.662742 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:43.662742 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:43.662742 master-0 kubenswrapper[7614]: I0224 05:22:43.662710 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:44.664553 master-0 kubenswrapper[7614]: I0224 05:22:44.664415 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:44.664553 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:44.664553 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:44.664553 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:44.665740 master-0 kubenswrapper[7614]: I0224 05:22:44.664572 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:45.663270 master-0 kubenswrapper[7614]: I0224 05:22:45.660392 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:22:45.663270 master-0 kubenswrapper[7614]: I0224 05:22:45.663159 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:45.663270 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:45.663270 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:45.663270 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:45.663270 master-0 kubenswrapper[7614]: I0224 05:22:45.663229 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:46.664342 master-0 kubenswrapper[7614]: I0224 05:22:46.664216 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:46.664342 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:46.664342 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:46.664342 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:46.665661 master-0 kubenswrapper[7614]: I0224 05:22:46.664345 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:47.665052 master-0 kubenswrapper[7614]: I0224 05:22:47.664950 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:47.665052 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:47.665052 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:47.665052 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:47.666157 master-0 kubenswrapper[7614]: I0224 05:22:47.665083 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:48.663695 master-0 kubenswrapper[7614]: I0224 05:22:48.663574 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:48.663695 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:48.663695 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:48.663695 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:48.664267 master-0 kubenswrapper[7614]: I0224 05:22:48.663712 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:49.664245 master-0 kubenswrapper[7614]: I0224 05:22:49.664128 7614 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:49.664245 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:49.664245 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:49.664245 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:49.665072 master-0 kubenswrapper[7614]: I0224 05:22:49.664253 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:50.664365 master-0 kubenswrapper[7614]: I0224 05:22:50.664241 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:50.664365 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:50.664365 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:50.664365 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:50.665943 master-0 kubenswrapper[7614]: I0224 05:22:50.664373 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:51.663547 master-0 kubenswrapper[7614]: I0224 05:22:51.663433 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:51.663547 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:51.663547 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:51.663547 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:51.663947 master-0 kubenswrapper[7614]: I0224 05:22:51.663573 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:52.400087 master-0 kubenswrapper[7614]: I0224 05:22:52.400019 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5m82s"] Feb 24 05:22:52.401658 master-0 kubenswrapper[7614]: I0224 05:22:52.401615 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:22:52.404478 master-0 kubenswrapper[7614]: I0224 05:22:52.404408 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 05:22:52.404872 master-0 kubenswrapper[7614]: I0224 05:22:52.404820 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-l2gcc" Feb 24 05:22:52.405160 master-0 kubenswrapper[7614]: I0224 05:22:52.405116 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 05:22:52.408345 master-0 kubenswrapper[7614]: I0224 05:22:52.408250 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5m82s"] Feb 24 05:22:52.408908 master-0 kubenswrapper[7614]: I0224 05:22:52.408865 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 05:22:52.494921 master-0 
kubenswrapper[7614]: I0224 05:22:52.494842 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:22:52.495361 master-0 kubenswrapper[7614]: I0224 05:22:52.495012 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbt92\" (UniqueName: \"kubernetes.io/projected/f938daff-1d36-4348-a689-3d1607058296-kube-api-access-xbt92\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:22:52.596820 master-0 kubenswrapper[7614]: I0224 05:22:52.596709 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:22:52.597202 master-0 kubenswrapper[7614]: I0224 05:22:52.596953 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt92\" (UniqueName: \"kubernetes.io/projected/f938daff-1d36-4348-a689-3d1607058296-kube-api-access-xbt92\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:22:52.603853 master-0 kubenswrapper[7614]: I0224 05:22:52.603698 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:22:52.634787 master-0 
kubenswrapper[7614]: I0224 05:22:52.634608 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt92\" (UniqueName: \"kubernetes.io/projected/f938daff-1d36-4348-a689-3d1607058296-kube-api-access-xbt92\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:22:52.662929 master-0 kubenswrapper[7614]: I0224 05:22:52.662829 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:52.662929 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:52.662929 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:52.662929 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:52.662929 master-0 kubenswrapper[7614]: I0224 05:22:52.662918 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:52.731645 master-0 kubenswrapper[7614]: I0224 05:22:52.731522 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:22:52.787775 master-0 kubenswrapper[7614]: I0224 05:22:52.787695 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/2.log" Feb 24 05:22:52.788807 master-0 kubenswrapper[7614]: I0224 05:22:52.788753 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/1.log" Feb 24 05:22:52.789482 master-0 kubenswrapper[7614]: I0224 05:22:52.789415 7614 generic.go:334] "Generic (PLEG): container finished" podID="3d6b1ce7-1213-494c-829d-186d39eac7eb" containerID="50c8d66910cbcf1dcdc03811dff2f9abc3d95e2e93235a68b4cc89109830e7b9" exitCode=1 Feb 24 05:22:52.789600 master-0 kubenswrapper[7614]: I0224 05:22:52.789497 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerDied","Data":"50c8d66910cbcf1dcdc03811dff2f9abc3d95e2e93235a68b4cc89109830e7b9"} Feb 24 05:22:52.789600 master-0 kubenswrapper[7614]: I0224 05:22:52.789585 7614 scope.go:117] "RemoveContainer" containerID="f9e75ea6f0c81eec46e337376adf731ab535fc067c7d1c6d227f14a9e7433ffe" Feb 24 05:22:52.790576 master-0 kubenswrapper[7614]: I0224 05:22:52.790526 7614 scope.go:117] "RemoveContainer" containerID="50c8d66910cbcf1dcdc03811dff2f9abc3d95e2e93235a68b4cc89109830e7b9" Feb 24 05:22:52.791015 master-0 kubenswrapper[7614]: E0224 05:22:52.790964 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" 
pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:22:53.236879 master-0 kubenswrapper[7614]: I0224 05:22:53.236767 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5m82s"] Feb 24 05:22:53.241685 master-0 kubenswrapper[7614]: W0224 05:22:53.241616 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf938daff_1d36_4348_a689_3d1607058296.slice/crio-0a200f132e292ed5670ebdd181d6f49bb6c398710ac1ebdc14c3c7cdc32842f8 WatchSource:0}: Error finding container 0a200f132e292ed5670ebdd181d6f49bb6c398710ac1ebdc14c3c7cdc32842f8: Status 404 returned error can't find the container with id 0a200f132e292ed5670ebdd181d6f49bb6c398710ac1ebdc14c3c7cdc32842f8 Feb 24 05:22:53.664274 master-0 kubenswrapper[7614]: I0224 05:22:53.664103 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:53.664274 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:53.664274 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:53.664274 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:53.664274 master-0 kubenswrapper[7614]: I0224 05:22:53.664214 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:53.802095 master-0 kubenswrapper[7614]: I0224 05:22:53.802003 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5m82s" 
event={"ID":"f938daff-1d36-4348-a689-3d1607058296","Type":"ContainerStarted","Data":"c300c1ab6b722b5d59f3d218061d14e892b7db5ca8b64795d6a17d35f8779189"} Feb 24 05:22:53.802095 master-0 kubenswrapper[7614]: I0224 05:22:53.802085 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5m82s" event={"ID":"f938daff-1d36-4348-a689-3d1607058296","Type":"ContainerStarted","Data":"0a200f132e292ed5670ebdd181d6f49bb6c398710ac1ebdc14c3c7cdc32842f8"} Feb 24 05:22:53.805241 master-0 kubenswrapper[7614]: I0224 05:22:53.805177 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/2.log" Feb 24 05:22:53.832096 master-0 kubenswrapper[7614]: I0224 05:22:53.831197 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5m82s" podStartSLOduration=1.831163917 podStartE2EDuration="1.831163917s" podCreationTimestamp="2026-02-24 05:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:22:53.82489082 +0000 UTC m=+504.859634006" watchObservedRunningTime="2026-02-24 05:22:53.831163917 +0000 UTC m=+504.865907113" Feb 24 05:22:54.663982 master-0 kubenswrapper[7614]: I0224 05:22:54.663871 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:54.663982 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:54.663982 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:54.663982 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:54.663982 master-0 kubenswrapper[7614]: I0224 05:22:54.663988 7614 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:55.663845 master-0 kubenswrapper[7614]: I0224 05:22:55.663718 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:55.663845 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:55.663845 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:55.663845 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:55.663845 master-0 kubenswrapper[7614]: I0224 05:22:55.663839 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:56.663279 master-0 kubenswrapper[7614]: I0224 05:22:56.663154 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:56.663279 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:56.663279 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:56.663279 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:56.663279 master-0 kubenswrapper[7614]: I0224 05:22:56.663275 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 24 05:22:57.663050 master-0 kubenswrapper[7614]: I0224 05:22:57.662903 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:57.663050 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:57.663050 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:57.663050 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:57.663050 master-0 kubenswrapper[7614]: I0224 05:22:57.663036 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:58.664019 master-0 kubenswrapper[7614]: I0224 05:22:58.663930 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:58.664019 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:58.664019 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:58.664019 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:58.665498 master-0 kubenswrapper[7614]: I0224 05:22:58.665440 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:22:59.663082 master-0 kubenswrapper[7614]: I0224 05:22:59.662980 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:22:59.663082 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:22:59.663082 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:22:59.663082 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:22:59.663082 master-0 kubenswrapper[7614]: I0224 05:22:59.663055 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:00.663949 master-0 kubenswrapper[7614]: I0224 05:23:00.663827 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:00.663949 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:00.663949 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:00.663949 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:00.665058 master-0 kubenswrapper[7614]: I0224 05:23:00.663976 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:01.663815 master-0 kubenswrapper[7614]: I0224 05:23:01.663689 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:01.663815 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 
05:23:01.663815 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:01.663815 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:01.663815 master-0 kubenswrapper[7614]: I0224 05:23:01.663821 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:02.663469 master-0 kubenswrapper[7614]: I0224 05:23:02.663299 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:02.663469 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:02.663469 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:02.663469 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:02.663469 master-0 kubenswrapper[7614]: I0224 05:23:02.663462 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:03.664531 master-0 kubenswrapper[7614]: I0224 05:23:03.664460 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:03.664531 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:03.664531 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:03.664531 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:03.665681 master-0 kubenswrapper[7614]: I0224 05:23:03.665577 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:04.175661 master-0 kubenswrapper[7614]: I0224 05:23:04.175581 7614 scope.go:117] "RemoveContainer" containerID="50c8d66910cbcf1dcdc03811dff2f9abc3d95e2e93235a68b4cc89109830e7b9"
Feb 24 05:23:04.176230 master-0 kubenswrapper[7614]: E0224 05:23:04.176062 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb"
Feb 24 05:23:04.664296 master-0 kubenswrapper[7614]: I0224 05:23:04.664083 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:04.664296 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:04.664296 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:04.664296 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:04.664296 master-0 kubenswrapper[7614]: I0224 05:23:04.664239 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:05.664050 master-0 kubenswrapper[7614]: I0224 05:23:05.663944 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:05.664050 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:05.664050 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:05.664050 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:05.664435 master-0 kubenswrapper[7614]: I0224 05:23:05.664075 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:06.663553 master-0 kubenswrapper[7614]: I0224 05:23:06.663458 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:06.663553 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:06.663553 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:06.663553 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:06.664682 master-0 kubenswrapper[7614]: I0224 05:23:06.663564 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:07.664327 master-0 kubenswrapper[7614]: I0224 05:23:07.664204 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:07.664327 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:07.664327 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:07.664327 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:07.664327 master-0 kubenswrapper[7614]: I0224 05:23:07.664296 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:08.663850 master-0 kubenswrapper[7614]: I0224 05:23:08.663722 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:08.663850 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:08.663850 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:08.663850 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:08.664409 master-0 kubenswrapper[7614]: I0224 05:23:08.663880 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:09.663039 master-0 kubenswrapper[7614]: I0224 05:23:09.662933 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:09.663039 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:09.663039 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:09.663039 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:09.663697 master-0 kubenswrapper[7614]: I0224 05:23:09.663041 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:10.664143 master-0 kubenswrapper[7614]: I0224 05:23:10.664010 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:10.664143 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:10.664143 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:10.664143 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:10.665377 master-0 kubenswrapper[7614]: I0224 05:23:10.664155 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:11.663081 master-0 kubenswrapper[7614]: I0224 05:23:11.662965 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:11.663081 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:11.663081 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:11.663081 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:11.663081 master-0 kubenswrapper[7614]: I0224 05:23:11.663071 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:12.664666 master-0 kubenswrapper[7614]: I0224 05:23:12.664559 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:12.664666 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:12.664666 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:12.664666 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:12.666005 master-0 kubenswrapper[7614]: I0224 05:23:12.664672 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:13.663264 master-0 kubenswrapper[7614]: I0224 05:23:13.663130 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:13.663264 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:13.663264 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:13.663264 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:13.663824 master-0 kubenswrapper[7614]: I0224 05:23:13.663262 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:14.663788 master-0 kubenswrapper[7614]: I0224 05:23:14.663654 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:14.663788 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:14.663788 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:14.663788 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:14.663788 master-0 kubenswrapper[7614]: I0224 05:23:14.663777 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:15.664620 master-0 kubenswrapper[7614]: I0224 05:23:15.664484 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:15.664620 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:15.664620 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:15.664620 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:15.665957 master-0 kubenswrapper[7614]: I0224 05:23:15.664636 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:16.663710 master-0 kubenswrapper[7614]: I0224 05:23:16.663586 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:16.663710 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:16.663710 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:16.663710 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:16.664210 master-0 kubenswrapper[7614]: I0224 05:23:16.663764 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:17.664375 master-0 kubenswrapper[7614]: I0224 05:23:17.664199 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:17.664375 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:17.664375 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:17.664375 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:17.665622 master-0 kubenswrapper[7614]: I0224 05:23:17.664437 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:18.664041 master-0 kubenswrapper[7614]: I0224 05:23:18.663946 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:18.664041 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:18.664041 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:18.664041 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:18.665419 master-0 kubenswrapper[7614]: I0224 05:23:18.664050 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:19.181215 master-0 kubenswrapper[7614]: I0224 05:23:19.181106 7614 scope.go:117] "RemoveContainer" containerID="50c8d66910cbcf1dcdc03811dff2f9abc3d95e2e93235a68b4cc89109830e7b9"
Feb 24 05:23:19.663454 master-0 kubenswrapper[7614]: I0224 05:23:19.663340 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:19.663454 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:19.663454 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:19.663454 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:19.664455 master-0 kubenswrapper[7614]: I0224 05:23:19.663452 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:20.038226 master-0 kubenswrapper[7614]: I0224 05:23:20.038119 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/2.log"
Feb 24 05:23:20.039000 master-0 kubenswrapper[7614]: I0224 05:23:20.038928 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba"}
Feb 24 05:23:20.663771 master-0 kubenswrapper[7614]: I0224 05:23:20.663656 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:20.663771 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:20.663771 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:20.663771 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:20.664157 master-0 kubenswrapper[7614]: I0224 05:23:20.663813 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:21.663239 master-0 kubenswrapper[7614]: I0224 05:23:21.663124 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:21.663239 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:21.663239 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:21.663239 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:21.664352 master-0 kubenswrapper[7614]: I0224 05:23:21.663244 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:22.663566 master-0 kubenswrapper[7614]: I0224 05:23:22.663449 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:22.663566 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:22.663566 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:22.663566 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:22.664625 master-0 kubenswrapper[7614]: I0224 05:23:22.663570 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:23.664066 master-0 kubenswrapper[7614]: I0224 05:23:23.663942 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:23.664066 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:23.664066 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:23.664066 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:23.664066 master-0 kubenswrapper[7614]: I0224 05:23:23.664074 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:24.664114 master-0 kubenswrapper[7614]: I0224 05:23:24.664005 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:24.664114 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:24.664114 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:24.664114 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:24.665193 master-0 kubenswrapper[7614]: I0224 05:23:24.664119 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:25.664629 master-0 kubenswrapper[7614]: I0224 05:23:25.664521 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:25.664629 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:25.664629 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:25.664629 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:25.665273 master-0 kubenswrapper[7614]: I0224 05:23:25.664648 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:26.663832 master-0 kubenswrapper[7614]: I0224 05:23:26.663674 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:26.663832 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:26.663832 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:26.663832 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:26.664422 master-0 kubenswrapper[7614]: I0224 05:23:26.663825 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:27.663884 master-0 kubenswrapper[7614]: I0224 05:23:27.663777 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:27.663884 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:27.663884 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:27.663884 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:27.663884 master-0 kubenswrapper[7614]: I0224 05:23:27.663877 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:28.663325 master-0 kubenswrapper[7614]: I0224 05:23:28.663219 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:28.663325 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:28.663325 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:28.663325 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:28.663651 master-0 kubenswrapper[7614]: I0224 05:23:28.663421 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:29.662761 master-0 kubenswrapper[7614]: I0224 05:23:29.662664 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:29.662761 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:29.662761 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:29.662761 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:29.664394 master-0 kubenswrapper[7614]: I0224 05:23:29.664304 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:30.663771 master-0 kubenswrapper[7614]: I0224 05:23:30.663666 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:30.663771 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:30.663771 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:30.663771 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:30.664735 master-0 kubenswrapper[7614]: I0224 05:23:30.663801 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:31.663880 master-0 kubenswrapper[7614]: I0224 05:23:31.663754 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:31.663880 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:31.663880 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:31.663880 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:31.663880 master-0 kubenswrapper[7614]: I0224 05:23:31.663870 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:32.663694 master-0 kubenswrapper[7614]: I0224 05:23:32.663593 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:32.663694 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:32.663694 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:32.663694 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:32.664761 master-0 kubenswrapper[7614]: I0224 05:23:32.663730 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:33.664356 master-0 kubenswrapper[7614]: I0224 05:23:33.664220 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:33.664356 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:33.664356 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:33.664356 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:33.665660 master-0 kubenswrapper[7614]: I0224 05:23:33.664390 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:34.662510 master-0 kubenswrapper[7614]: I0224 05:23:34.662399 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:34.662510 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:34.662510 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:34.662510 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:34.663000 master-0 kubenswrapper[7614]: I0224 05:23:34.662525 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:35.664092 master-0 kubenswrapper[7614]: I0224 05:23:35.663982 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:35.664092 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:35.664092 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:35.664092 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:35.664868 master-0 kubenswrapper[7614]: I0224 05:23:35.664138 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:36.233120 master-0 kubenswrapper[7614]: I0224 05:23:36.233001 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 24 05:23:36.234653 master-0 kubenswrapper[7614]: I0224 05:23:36.234600 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.254448 master-0 kubenswrapper[7614]: I0224 05:23:36.254397 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 24 05:23:36.254940 master-0 kubenswrapper[7614]: I0224 05:23:36.254879 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjfbr"
Feb 24 05:23:36.275660 master-0 kubenswrapper[7614]: I0224 05:23:36.275566 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 24 05:23:36.294743 master-0 kubenswrapper[7614]: I0224 05:23:36.294245 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e058a29-f50f-473a-a217-0021923ebc7c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.294743 master-0 kubenswrapper[7614]: I0224 05:23:36.294335 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.294743 master-0 kubenswrapper[7614]: I0224 05:23:36.294443 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-var-lock\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.395904 master-0 kubenswrapper[7614]: I0224 05:23:36.395799 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-var-lock\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.396152 master-0 kubenswrapper[7614]: I0224 05:23:36.395941 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e058a29-f50f-473a-a217-0021923ebc7c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.396152 master-0 kubenswrapper[7614]: I0224 05:23:36.395977 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.396152 master-0 kubenswrapper[7614]: I0224 05:23:36.395980 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-var-lock\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.396152 master-0 kubenswrapper[7614]: I0224 05:23:36.396088 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.412975 master-0 kubenswrapper[7614]: I0224 05:23:36.412927 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e058a29-f50f-473a-a217-0021923ebc7c-kube-api-access\") pod \"installer-2-master-0\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.568989 master-0 kubenswrapper[7614]: I0224 05:23:36.568786 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:23:36.666337 master-0 kubenswrapper[7614]: I0224 05:23:36.666244 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:36.666337 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:36.666337 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:36.666337 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:36.667195 master-0 kubenswrapper[7614]: I0224 05:23:36.666351 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:37.079765 master-0 kubenswrapper[7614]: I0224 05:23:37.079703 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Feb 24 05:23:37.187782 master-0 kubenswrapper[7614]: I0224 05:23:37.187710 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"4e058a29-f50f-473a-a217-0021923ebc7c","Type":"ContainerStarted","Data":"769734d30190536a2d572317485788006caf1f452e2bf4039cbb5f5e275cd997"}
Feb 24 05:23:37.663080 master-0 kubenswrapper[7614]: I0224 05:23:37.662922 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:37.663080 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:37.663080 master-0 kubenswrapper[7614]: [+]process-running
ok Feb 24 05:23:37.663080 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:37.663080 master-0 kubenswrapper[7614]: I0224 05:23:37.663029 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:38.211573 master-0 kubenswrapper[7614]: I0224 05:23:38.211438 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"4e058a29-f50f-473a-a217-0021923ebc7c","Type":"ContainerStarted","Data":"4a683c2df0643cd32ba4287e2bcfda52e85d58cdef62154fe0290d7b742d186c"} Feb 24 05:23:38.255660 master-0 kubenswrapper[7614]: I0224 05:23:38.255555 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.255494319 podStartE2EDuration="2.255494319s" podCreationTimestamp="2026-02-24 05:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:23:38.242769164 +0000 UTC m=+549.277512330" watchObservedRunningTime="2026-02-24 05:23:38.255494319 +0000 UTC m=+549.290237485" Feb 24 05:23:38.662475 master-0 kubenswrapper[7614]: I0224 05:23:38.662416 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:38.662475 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:38.662475 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:38.662475 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:38.662986 master-0 kubenswrapper[7614]: I0224 05:23:38.662951 7614 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:39.662586 master-0 kubenswrapper[7614]: I0224 05:23:39.662521 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:39.662586 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:39.662586 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:39.662586 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:39.663545 master-0 kubenswrapper[7614]: I0224 05:23:39.662600 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:40.664453 master-0 kubenswrapper[7614]: I0224 05:23:40.664355 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:40.664453 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:40.664453 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:40.664453 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:40.665621 master-0 kubenswrapper[7614]: I0224 05:23:40.664457 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 05:23:41.663223 master-0 kubenswrapper[7614]: I0224 05:23:41.663132 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:41.663223 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:41.663223 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:41.663223 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:41.663751 master-0 kubenswrapper[7614]: I0224 05:23:41.663243 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:42.663049 master-0 kubenswrapper[7614]: I0224 05:23:42.662926 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:42.663049 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:42.663049 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:42.663049 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:42.664119 master-0 kubenswrapper[7614]: I0224 05:23:42.663090 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:43.664118 master-0 kubenswrapper[7614]: I0224 05:23:43.663998 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:43.664118 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:43.664118 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:43.664118 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:43.665224 master-0 kubenswrapper[7614]: I0224 05:23:43.664128 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:44.663709 master-0 kubenswrapper[7614]: I0224 05:23:44.663619 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:44.663709 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:44.663709 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:44.663709 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:44.664066 master-0 kubenswrapper[7614]: I0224 05:23:44.663744 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:45.098295 master-0 kubenswrapper[7614]: I0224 05:23:45.098210 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 24 05:23:45.099680 master-0 kubenswrapper[7614]: I0224 05:23:45.099085 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.102661 master-0 kubenswrapper[7614]: I0224 05:23:45.102596 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-d88q9" Feb 24 05:23:45.103758 master-0 kubenswrapper[7614]: I0224 05:23:45.103718 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 05:23:45.125921 master-0 kubenswrapper[7614]: I0224 05:23:45.125850 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 24 05:23:45.146767 master-0 kubenswrapper[7614]: I0224 05:23:45.146685 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d063f48-5f89-47d0-bafc-84a52839c806-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.146767 master-0 kubenswrapper[7614]: I0224 05:23:45.146778 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.147164 master-0 kubenswrapper[7614]: I0224 05:23:45.146838 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-var-lock\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.247804 master-0 kubenswrapper[7614]: I0224 05:23:45.247688 7614 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d063f48-5f89-47d0-bafc-84a52839c806-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.247804 master-0 kubenswrapper[7614]: I0224 05:23:45.247817 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.249165 master-0 kubenswrapper[7614]: I0224 05:23:45.249059 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-var-lock\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.249954 master-0 kubenswrapper[7614]: I0224 05:23:45.249512 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-var-lock\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.249954 master-0 kubenswrapper[7614]: I0224 05:23:45.249081 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.275205 master-0 kubenswrapper[7614]: I0224 05:23:45.275129 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d063f48-5f89-47d0-bafc-84a52839c806-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.423993 master-0 kubenswrapper[7614]: I0224 05:23:45.423772 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:23:45.663921 master-0 kubenswrapper[7614]: I0224 05:23:45.663800 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:45.663921 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:45.663921 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:45.663921 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:45.664610 master-0 kubenswrapper[7614]: I0224 05:23:45.663931 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:45.944002 master-0 kubenswrapper[7614]: I0224 05:23:45.943926 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 24 05:23:45.954888 master-0 kubenswrapper[7614]: W0224 05:23:45.954707 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7d063f48_5f89_47d0_bafc_84a52839c806.slice/crio-835ae03e3e8588604d9220c7c10316442703346b5052f347621a9b0860a0156c WatchSource:0}: Error finding container 835ae03e3e8588604d9220c7c10316442703346b5052f347621a9b0860a0156c: Status 404 returned error can't find the container with id 
835ae03e3e8588604d9220c7c10316442703346b5052f347621a9b0860a0156c Feb 24 05:23:46.279835 master-0 kubenswrapper[7614]: I0224 05:23:46.279697 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"7d063f48-5f89-47d0-bafc-84a52839c806","Type":"ContainerStarted","Data":"835ae03e3e8588604d9220c7c10316442703346b5052f347621a9b0860a0156c"} Feb 24 05:23:46.662665 master-0 kubenswrapper[7614]: I0224 05:23:46.662521 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:46.662665 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:46.662665 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:46.662665 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:46.663004 master-0 kubenswrapper[7614]: I0224 05:23:46.662657 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:47.289190 master-0 kubenswrapper[7614]: I0224 05:23:47.289114 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"7d063f48-5f89-47d0-bafc-84a52839c806","Type":"ContainerStarted","Data":"d347e24453ee574539f27391a430e305f8f75f2030a25c584a9b3378c1e400e8"} Feb 24 05:23:47.316617 master-0 kubenswrapper[7614]: I0224 05:23:47.316515 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.316487273 podStartE2EDuration="2.316487273s" podCreationTimestamp="2026-02-24 05:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:23:47.310811903 +0000 UTC m=+558.345555079" watchObservedRunningTime="2026-02-24 05:23:47.316487273 +0000 UTC m=+558.351230429" Feb 24 05:23:47.664424 master-0 kubenswrapper[7614]: I0224 05:23:47.664111 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:47.664424 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:47.664424 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:47.664424 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:47.664424 master-0 kubenswrapper[7614]: I0224 05:23:47.664196 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:47.974009 master-0 kubenswrapper[7614]: I0224 05:23:47.973782 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 24 05:23:47.975152 master-0 kubenswrapper[7614]: I0224 05:23:47.975100 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:47.979748 master-0 kubenswrapper[7614]: I0224 05:23:47.979702 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-726z4" Feb 24 05:23:47.980198 master-0 kubenswrapper[7614]: I0224 05:23:47.980142 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 24 05:23:47.995795 master-0 kubenswrapper[7614]: I0224 05:23:47.995372 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-var-lock\") pod \"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:47.995795 master-0 kubenswrapper[7614]: I0224 05:23:47.995753 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:47.995978 master-0 kubenswrapper[7614]: I0224 05:23:47.995835 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b0d9bb-1b88-4023-8b08-896d581c79c7-kube-api-access\") pod \"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:48.005658 master-0 kubenswrapper[7614]: I0224 05:23:48.005294 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 24 05:23:48.097149 master-0 kubenswrapper[7614]: I0224 05:23:48.097058 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-var-lock\") pod \"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:48.097512 master-0 kubenswrapper[7614]: I0224 05:23:48.097174 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:48.097512 master-0 kubenswrapper[7614]: I0224 05:23:48.097267 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b0d9bb-1b88-4023-8b08-896d581c79c7-kube-api-access\") pod \"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:48.097816 master-0 kubenswrapper[7614]: I0224 05:23:48.097562 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:48.097816 master-0 kubenswrapper[7614]: I0224 05:23:48.097696 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-var-lock\") pod \"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:48.116501 master-0 kubenswrapper[7614]: I0224 05:23:48.115953 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b0d9bb-1b88-4023-8b08-896d581c79c7-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") " pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:48.307836 master-0 kubenswrapper[7614]: I0224 05:23:48.307733 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 24 05:23:48.663232 master-0 kubenswrapper[7614]: I0224 05:23:48.663070 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:48.663232 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:48.663232 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:48.663232 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:48.663232 master-0 kubenswrapper[7614]: I0224 05:23:48.663179 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:48.869713 master-0 kubenswrapper[7614]: I0224 05:23:48.869633 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 24 05:23:49.307956 master-0 kubenswrapper[7614]: I0224 05:23:49.307834 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"29b0d9bb-1b88-4023-8b08-896d581c79c7","Type":"ContainerStarted","Data":"e62e33bc2b32fa546c8b71cdec9803c18e73e881c996067ed355eb35c01427f7"} Feb 24 05:23:49.662834 master-0 kubenswrapper[7614]: I0224 05:23:49.662750 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 24 05:23:49.662834 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:49.662834 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:49.662834 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:49.663455 master-0 kubenswrapper[7614]: I0224 05:23:49.662840 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:50.317478 master-0 kubenswrapper[7614]: I0224 05:23:50.317402 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"29b0d9bb-1b88-4023-8b08-896d581c79c7","Type":"ContainerStarted","Data":"e12e5627ae03ebb97ca362b2b8faa759ca1b9a419649b89bb29941198d85f2b3"} Feb 24 05:23:50.344265 master-0 kubenswrapper[7614]: I0224 05:23:50.344133 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=3.344098882 podStartE2EDuration="3.344098882s" podCreationTimestamp="2026-02-24 05:23:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:23:50.339059089 +0000 UTC m=+561.373802255" watchObservedRunningTime="2026-02-24 05:23:50.344098882 +0000 UTC m=+561.378842078" Feb 24 05:23:50.663092 master-0 kubenswrapper[7614]: I0224 05:23:50.662897 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:50.663092 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:50.663092 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 
05:23:50.663092 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:50.663092 master-0 kubenswrapper[7614]: I0224 05:23:50.662978 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:51.662929 master-0 kubenswrapper[7614]: I0224 05:23:51.662809 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:51.662929 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:51.662929 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:51.662929 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:51.662929 master-0 kubenswrapper[7614]: I0224 05:23:51.662934 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:52.663536 master-0 kubenswrapper[7614]: I0224 05:23:52.663406 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:52.663536 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:52.663536 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:52.663536 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:52.664856 master-0 kubenswrapper[7614]: I0224 05:23:52.663579 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:53.663635 master-0 kubenswrapper[7614]: I0224 05:23:53.663518 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:53.663635 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:53.663635 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:53.663635 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:53.664817 master-0 kubenswrapper[7614]: I0224 05:23:53.663651 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:54.663736 master-0 kubenswrapper[7614]: I0224 05:23:54.663635 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:23:54.663736 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:23:54.663736 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:23:54.663736 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:23:54.665174 master-0 kubenswrapper[7614]: I0224 05:23:54.663747 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:23:55.663969 
master-0 kubenswrapper[7614]: I0224 05:23:55.663867 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:55.663969 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:55.663969 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:55.663969 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:55.663969 master-0 kubenswrapper[7614]: I0224 05:23:55.663965 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:56.664134 master-0 kubenswrapper[7614]: I0224 05:23:56.663977 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:56.664134 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:56.664134 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:56.664134 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:56.665517 master-0 kubenswrapper[7614]: I0224 05:23:56.664194 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:57.663847 master-0 kubenswrapper[7614]: I0224 05:23:57.663751 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:57.663847 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:57.663847 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:57.663847 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:57.664821 master-0 kubenswrapper[7614]: I0224 05:23:57.663870 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:58.289511 master-0 kubenswrapper[7614]: I0224 05:23:58.289441 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-j28p2"]
Feb 24 05:23:58.290567 master-0 kubenswrapper[7614]: I0224 05:23:58.290534 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.292434 master-0 kubenswrapper[7614]: I0224 05:23:58.292373 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-htglv"
Feb 24 05:23:58.294841 master-0 kubenswrapper[7614]: I0224 05:23:58.294807 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Feb 24 05:23:58.487271 master-0 kubenswrapper[7614]: I0224 05:23:58.487178 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvkh\" (UniqueName: \"kubernetes.io/projected/2303d3b8-fe6a-469a-a306-4e1685181dbe-kube-api-access-6cvkh\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.487271 master-0 kubenswrapper[7614]: I0224 05:23:58.487256 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2303d3b8-fe6a-469a-a306-4e1685181dbe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.487659 master-0 kubenswrapper[7614]: I0224 05:23:58.487480 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2303d3b8-fe6a-469a-a306-4e1685181dbe-ready\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.487732 master-0 kubenswrapper[7614]: I0224 05:23:58.487687 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2303d3b8-fe6a-469a-a306-4e1685181dbe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.589398 master-0 kubenswrapper[7614]: I0224 05:23:58.589216 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2303d3b8-fe6a-469a-a306-4e1685181dbe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.589398 master-0 kubenswrapper[7614]: I0224 05:23:58.589389 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2303d3b8-fe6a-469a-a306-4e1685181dbe-ready\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.589691 master-0 kubenswrapper[7614]: I0224 05:23:58.589467 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2303d3b8-fe6a-469a-a306-4e1685181dbe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.589765 master-0 kubenswrapper[7614]: I0224 05:23:58.589702 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cvkh\" (UniqueName: \"kubernetes.io/projected/2303d3b8-fe6a-469a-a306-4e1685181dbe-kube-api-access-6cvkh\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.589875 master-0 kubenswrapper[7614]: I0224 05:23:58.589820 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2303d3b8-fe6a-469a-a306-4e1685181dbe-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.590555 master-0 kubenswrapper[7614]: I0224 05:23:58.590497 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2303d3b8-fe6a-469a-a306-4e1685181dbe-ready\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.590659 master-0 kubenswrapper[7614]: I0224 05:23:58.590620 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2303d3b8-fe6a-469a-a306-4e1685181dbe-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.619840 master-0 kubenswrapper[7614]: I0224 05:23:58.619783 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cvkh\" (UniqueName: \"kubernetes.io/projected/2303d3b8-fe6a-469a-a306-4e1685181dbe-kube-api-access-6cvkh\") pod \"cni-sysctl-allowlist-ds-j28p2\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.620830 master-0 kubenswrapper[7614]: I0224 05:23:58.620778 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:58.644002 master-0 kubenswrapper[7614]: W0224 05:23:58.643940 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2303d3b8_fe6a_469a_a306_4e1685181dbe.slice/crio-ed7a8ce67f1a3dbd05e9ef13a20015a9c7a3ffc856c5287128e78f3d3c245000 WatchSource:0}: Error finding container ed7a8ce67f1a3dbd05e9ef13a20015a9c7a3ffc856c5287128e78f3d3c245000: Status 404 returned error can't find the container with id ed7a8ce67f1a3dbd05e9ef13a20015a9c7a3ffc856c5287128e78f3d3c245000
Feb 24 05:23:58.694624 master-0 kubenswrapper[7614]: I0224 05:23:58.694494 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:58.694624 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:58.694624 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:58.694624 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:58.694624 master-0 kubenswrapper[7614]: I0224 05:23:58.694564 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:23:59.404837 master-0 kubenswrapper[7614]: I0224 05:23:59.404726 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" event={"ID":"2303d3b8-fe6a-469a-a306-4e1685181dbe","Type":"ContainerStarted","Data":"e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f"}
Feb 24 05:23:59.404837 master-0 kubenswrapper[7614]: I0224 05:23:59.404827 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" event={"ID":"2303d3b8-fe6a-469a-a306-4e1685181dbe","Type":"ContainerStarted","Data":"ed7a8ce67f1a3dbd05e9ef13a20015a9c7a3ffc856c5287128e78f3d3c245000"}
Feb 24 05:23:59.405236 master-0 kubenswrapper[7614]: I0224 05:23:59.405139 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:23:59.434019 master-0 kubenswrapper[7614]: I0224 05:23:59.433829 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" podStartSLOduration=1.43380493 podStartE2EDuration="1.43380493s" podCreationTimestamp="2026-02-24 05:23:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:23:59.429868507 +0000 UTC m=+570.464611693" watchObservedRunningTime="2026-02-24 05:23:59.43380493 +0000 UTC m=+570.468548096"
Feb 24 05:23:59.664752 master-0 kubenswrapper[7614]: I0224 05:23:59.664511 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:23:59.664752 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:23:59.664752 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:23:59.664752 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:23:59.664752 master-0 kubenswrapper[7614]: I0224 05:23:59.664618 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:00.447932 master-0 kubenswrapper[7614]: I0224 05:24:00.447833 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2"
Feb 24 05:24:00.664326 master-0 kubenswrapper[7614]: I0224 05:24:00.664210 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:00.664326 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:00.664326 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:00.664326 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:00.664764 master-0 kubenswrapper[7614]: I0224 05:24:00.664365 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:01.292562 master-0 kubenswrapper[7614]: I0224 05:24:01.292441 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-j28p2"]
Feb 24 05:24:01.663852 master-0 kubenswrapper[7614]: I0224 05:24:01.663658 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:01.663852 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:01.663852 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:01.663852 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:01.663852 master-0 kubenswrapper[7614]: I0224 05:24:01.663786 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:02.428169 master-0 kubenswrapper[7614]: I0224 05:24:02.428037 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" gracePeriod=30
Feb 24 05:24:02.665153 master-0 kubenswrapper[7614]: I0224 05:24:02.664953 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:02.665153 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:02.665153 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:02.665153 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:02.665153 master-0 kubenswrapper[7614]: I0224 05:24:02.665062 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:03.662558 master-0 kubenswrapper[7614]: I0224 05:24:03.662482 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:03.662558 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:03.662558 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:03.662558 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:03.662912 master-0 kubenswrapper[7614]: I0224 05:24:03.662573 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:04.664023 master-0 kubenswrapper[7614]: I0224 05:24:04.663916 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:04.664023 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:04.664023 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:04.664023 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:04.664023 master-0 kubenswrapper[7614]: I0224 05:24:04.664025 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:05.663445 master-0 kubenswrapper[7614]: I0224 05:24:05.663353 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:05.663445 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:05.663445 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:05.663445 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:05.664966 master-0 kubenswrapper[7614]: I0224 05:24:05.663480 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:06.664455 master-0 kubenswrapper[7614]: I0224 05:24:06.664339 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:06.664455 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:06.664455 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:06.664455 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:06.667922 master-0 kubenswrapper[7614]: I0224 05:24:06.664497 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:07.664139 master-0 kubenswrapper[7614]: I0224 05:24:07.664032 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:07.664139 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:07.664139 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:07.664139 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:07.665480 master-0 kubenswrapper[7614]: I0224 05:24:07.664162 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:08.286968 master-0 kubenswrapper[7614]: I0224 05:24:08.286862 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"]
Feb 24 05:24:08.288072 master-0 kubenswrapper[7614]: I0224 05:24:08.288033 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:24:08.290680 master-0 kubenswrapper[7614]: I0224 05:24:08.290617 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-zl45m"
Feb 24 05:24:08.303571 master-0 kubenswrapper[7614]: I0224 05:24:08.303506 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"]
Feb 24 05:24:08.376101 master-0 kubenswrapper[7614]: I0224 05:24:08.376011 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:24:08.376101 master-0 kubenswrapper[7614]: I0224 05:24:08.376080 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkz2q\" (UniqueName: \"kubernetes.io/projected/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-kube-api-access-rkz2q\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:24:08.477425 master-0 kubenswrapper[7614]: I0224 05:24:08.477303 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:24:08.477785 master-0 kubenswrapper[7614]: I0224 05:24:08.477456 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkz2q\" (UniqueName: \"kubernetes.io/projected/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-kube-api-access-rkz2q\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:24:08.481509 master-0 kubenswrapper[7614]: I0224 05:24:08.481433 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:24:08.510254 master-0 kubenswrapper[7614]: I0224 05:24:08.510191 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkz2q\" (UniqueName: \"kubernetes.io/projected/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-kube-api-access-rkz2q\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:24:08.621387 master-0 kubenswrapper[7614]: I0224 05:24:08.621238 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:24:08.624532 master-0 kubenswrapper[7614]: E0224 05:24:08.624431 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 05:24:08.627266 master-0 kubenswrapper[7614]: E0224 05:24:08.627216 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 05:24:08.630717 master-0 kubenswrapper[7614]: E0224 05:24:08.630693 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 05:24:08.630827 master-0 kubenswrapper[7614]: E0224 05:24:08.630803 7614 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" containerName="kube-multus-additional-cni-plugins"
Feb 24 05:24:08.665062 master-0 kubenswrapper[7614]: I0224 05:24:08.664958 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:08.665062 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:08.665062 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:08.665062 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:08.665658 master-0 kubenswrapper[7614]: I0224 05:24:08.665130 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:09.007643 master-0 kubenswrapper[7614]: I0224 05:24:09.007520 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-96c995bf5-57k8x"]
Feb 24 05:24:09.014168 master-0 kubenswrapper[7614]: I0224 05:24:09.009852 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.014168 master-0 kubenswrapper[7614]: I0224 05:24:09.012017 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 24 05:24:09.014168 master-0 kubenswrapper[7614]: I0224 05:24:09.012990 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 24 05:24:09.014426 master-0 kubenswrapper[7614]: I0224 05:24:09.014325 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-l6bv5"
Feb 24 05:24:09.015579 master-0 kubenswrapper[7614]: I0224 05:24:09.015118 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 24 05:24:09.015579 master-0 kubenswrapper[7614]: I0224 05:24:09.015166 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 24 05:24:09.015579 master-0 kubenswrapper[7614]: I0224 05:24:09.015118 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Feb 24 05:24:09.021795 master-0 kubenswrapper[7614]: I0224 05:24:09.021744 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 24 05:24:09.025723 master-0 kubenswrapper[7614]: I0224 05:24:09.024764 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-96c995bf5-57k8x"]
Feb 24 05:24:09.093254 master-0 kubenswrapper[7614]: I0224 05:24:09.093189 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.093254 master-0 kubenswrapper[7614]: I0224 05:24:09.093252 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.093633 master-0 kubenswrapper[7614]: I0224 05:24:09.093302 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46fll\" (UniqueName: \"kubernetes.io/projected/1163571d-f555-41ad-b04c-74c2dc452efe-kube-api-access-46fll\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.093633 master-0 kubenswrapper[7614]: I0224 05:24:09.093414 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.093633 master-0 kubenswrapper[7614]: I0224 05:24:09.093452 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.093727 master-0 kubenswrapper[7614]: I0224 05:24:09.093675 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.093878 master-0 kubenswrapper[7614]: I0224 05:24:09.093843 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.093979 master-0 kubenswrapper[7614]: I0224 05:24:09.093941 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.198435 master-0 kubenswrapper[7614]: I0224 05:24:09.198214 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.198435 master-0 kubenswrapper[7614]: I0224 05:24:09.198349 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.198435 master-0 kubenswrapper[7614]: I0224 05:24:09.198420 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.199728 master-0 kubenswrapper[7614]: I0224 05:24:09.199659 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.200039 master-0 kubenswrapper[7614]: I0224 05:24:09.200008 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.200673 master-0 kubenswrapper[7614]: I0224 05:24:09.200626 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.200814 master-0 kubenswrapper[7614]: I0224 05:24:09.200706 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.200919 master-0 kubenswrapper[7614]: I0224 05:24:09.200897 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46fll\" (UniqueName: \"kubernetes.io/projected/1163571d-f555-41ad-b04c-74c2dc452efe-kube-api-access-46fll\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.201036 master-0 kubenswrapper[7614]: I0224 05:24:09.200981 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.202247 master-0 kubenswrapper[7614]: I0224 05:24:09.202176 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.203615 master-0 kubenswrapper[7614]: I0224 05:24:09.202821 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.204077 master-0 kubenswrapper[7614]: I0224 05:24:09.203873 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.208860 master-0 kubenswrapper[7614]: I0224 05:24:09.208813 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.210995 master-0 kubenswrapper[7614]: I0224 05:24:09.210950 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.211628 master-0 kubenswrapper[7614]: I0224 05:24:09.211566 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.229920 master-0 kubenswrapper[7614]: I0224 05:24:09.229811 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"]
Feb 24 05:24:09.235866 master-0 kubenswrapper[7614]: I0224 05:24:09.235776 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46fll\" (UniqueName: \"kubernetes.io/projected/1163571d-f555-41ad-b04c-74c2dc452efe-kube-api-access-46fll\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:24:09.240906 master-0 kubenswrapper[7614]: W0224 05:24:09.240835 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ddb5ab7_0c1f_44ed_84fa_aaeb6b553e03.slice/crio-8f0c2bd56106a14890572575d4661ad3be97a3bf1270d2b66fc4d182958ebb72 WatchSource:0}: Error finding container
8f0c2bd56106a14890572575d4661ad3be97a3bf1270d2b66fc4d182958ebb72: Status 404 returned error can't find the container with id 8f0c2bd56106a14890572575d4661ad3be97a3bf1270d2b66fc4d182958ebb72 Feb 24 05:24:09.340007 master-0 kubenswrapper[7614]: I0224 05:24:09.339900 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:24:09.486798 master-0 kubenswrapper[7614]: I0224 05:24:09.486644 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" event={"ID":"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03","Type":"ContainerStarted","Data":"8f0c2bd56106a14890572575d4661ad3be97a3bf1270d2b66fc4d182958ebb72"} Feb 24 05:24:09.645871 master-0 kubenswrapper[7614]: I0224 05:24:09.645758 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-96c995bf5-57k8x"] Feb 24 05:24:09.649662 master-0 kubenswrapper[7614]: W0224 05:24:09.649584 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1163571d_f555_41ad_b04c_74c2dc452efe.slice/crio-922eed7d19f9dd738cf0b3fc3e3b004e0316f8e1783948356d4d447355655a65 WatchSource:0}: Error finding container 922eed7d19f9dd738cf0b3fc3e3b004e0316f8e1783948356d4d447355655a65: Status 404 returned error can't find the container with id 922eed7d19f9dd738cf0b3fc3e3b004e0316f8e1783948356d4d447355655a65 Feb 24 05:24:09.662301 master-0 kubenswrapper[7614]: I0224 05:24:09.662256 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:09.662301 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:09.662301 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 
05:24:09.662301 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:09.662509 master-0 kubenswrapper[7614]: I0224 05:24:09.662339 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:10.411510 master-0 kubenswrapper[7614]: I0224 05:24:10.411447 7614 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 24 05:24:10.412199 master-0 kubenswrapper[7614]: I0224 05:24:10.411789 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" containerID="cri-o://f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e" gracePeriod=30 Feb 24 05:24:10.412199 master-0 kubenswrapper[7614]: I0224 05:24:10.411842 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" containerID="cri-o://5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a" gracePeriod=30 Feb 24 05:24:10.412594 master-0 kubenswrapper[7614]: I0224 05:24:10.412561 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 05:24:10.412915 master-0 kubenswrapper[7614]: E0224 05:24:10.412893 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.412959 master-0 kubenswrapper[7614]: I0224 05:24:10.412915 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" 
containerName="kube-controller-manager" Feb 24 05:24:10.412959 master-0 kubenswrapper[7614]: E0224 05:24:10.412928 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.412959 master-0 kubenswrapper[7614]: I0224 05:24:10.412937 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.412959 master-0 kubenswrapper[7614]: E0224 05:24:10.412946 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.412959 master-0 kubenswrapper[7614]: I0224 05:24:10.412957 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.413134 master-0 kubenswrapper[7614]: E0224 05:24:10.412977 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 05:24:10.413134 master-0 kubenswrapper[7614]: I0224 05:24:10.412989 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 05:24:10.413134 master-0 kubenswrapper[7614]: E0224 05:24:10.413014 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.413134 master-0 kubenswrapper[7614]: I0224 05:24:10.413023 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.413134 master-0 kubenswrapper[7614]: E0224 05:24:10.413035 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 
05:24:10.413134 master-0 kubenswrapper[7614]: I0224 05:24:10.413044 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.413300 master-0 kubenswrapper[7614]: I0224 05:24:10.413203 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.413300 master-0 kubenswrapper[7614]: I0224 05:24:10.413220 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.413300 master-0 kubenswrapper[7614]: I0224 05:24:10.413233 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="cluster-policy-controller" Feb 24 05:24:10.413300 master-0 kubenswrapper[7614]: I0224 05:24:10.413242 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.413300 master-0 kubenswrapper[7614]: I0224 05:24:10.413260 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.413616 master-0 kubenswrapper[7614]: I0224 05:24:10.413591 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9ad9373c007a4fcd25e70622bdc8deb" containerName="kube-controller-manager" Feb 24 05:24:10.415166 master-0 kubenswrapper[7614]: I0224 05:24:10.415101 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:24:10.433485 master-0 kubenswrapper[7614]: I0224 05:24:10.433427 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"79656ffd720980cfc7e8a06d9f509855\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:24:10.433703 master-0 kubenswrapper[7614]: I0224 05:24:10.433656 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"79656ffd720980cfc7e8a06d9f509855\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:24:10.475653 master-0 kubenswrapper[7614]: I0224 05:24:10.475587 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 05:24:10.513793 master-0 kubenswrapper[7614]: I0224 05:24:10.513704 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" event={"ID":"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03","Type":"ContainerStarted","Data":"8b52274772851273cab1dc47c85e59d0c6850d39f04d77e98fa466c9eded5991"} Feb 24 05:24:10.513793 master-0 kubenswrapper[7614]: I0224 05:24:10.513785 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" event={"ID":"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03","Type":"ContainerStarted","Data":"f656ece55628cf0e0197286469ab49ef01dd73d46218a7f352271e7d45adb23e"} Feb 24 05:24:10.515538 master-0 kubenswrapper[7614]: I0224 05:24:10.515500 7614 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" event={"ID":"1163571d-f555-41ad-b04c-74c2dc452efe","Type":"ContainerStarted","Data":"922eed7d19f9dd738cf0b3fc3e3b004e0316f8e1783948356d4d447355655a65"} Feb 24 05:24:10.537814 master-0 kubenswrapper[7614]: I0224 05:24:10.537291 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"79656ffd720980cfc7e8a06d9f509855\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:24:10.537814 master-0 kubenswrapper[7614]: I0224 05:24:10.537496 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"79656ffd720980cfc7e8a06d9f509855\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:24:10.537814 master-0 kubenswrapper[7614]: I0224 05:24:10.537631 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"79656ffd720980cfc7e8a06d9f509855\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:24:10.537814 master-0 kubenswrapper[7614]: I0224 05:24:10.537682 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"79656ffd720980cfc7e8a06d9f509855\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:24:10.583075 master-0 kubenswrapper[7614]: I0224 05:24:10.582982 7614 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" podStartSLOduration=2.582956808 podStartE2EDuration="2.582956808s" podCreationTimestamp="2026-02-24 05:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:24:10.581376197 +0000 UTC m=+581.616119363" watchObservedRunningTime="2026-02-24 05:24:10.582956808 +0000 UTC m=+581.617699974" Feb 24 05:24:10.606106 master-0 kubenswrapper[7614]: I0224 05:24:10.605632 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:24:10.634647 master-0 kubenswrapper[7614]: I0224 05:24:10.634587 7614 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="208f05b2-30d0-4793-9268-5d8f16844324" Feb 24 05:24:10.638756 master-0 kubenswrapper[7614]: I0224 05:24:10.638711 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 05:24:10.638839 master-0 kubenswrapper[7614]: I0224 05:24:10.638765 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 05:24:10.638874 master-0 kubenswrapper[7614]: I0224 05:24:10.638850 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 
05:24:10.638905 master-0 kubenswrapper[7614]: I0224 05:24:10.638871 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 05:24:10.638937 master-0 kubenswrapper[7614]: I0224 05:24:10.638906 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") pod \"c9ad9373c007a4fcd25e70622bdc8deb\" (UID: \"c9ad9373c007a4fcd25e70622bdc8deb\") " Feb 24 05:24:10.639719 master-0 kubenswrapper[7614]: I0224 05:24:10.639669 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:24:10.639783 master-0 kubenswrapper[7614]: I0224 05:24:10.639738 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config" (OuterVolumeSpecName: "config") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:24:10.639783 master-0 kubenswrapper[7614]: I0224 05:24:10.639755 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs" (OuterVolumeSpecName: "logs") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:24:10.639783 master-0 kubenswrapper[7614]: I0224 05:24:10.639771 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:24:10.639874 master-0 kubenswrapper[7614]: I0224 05:24:10.639788 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets" (OuterVolumeSpecName: "secrets") pod "c9ad9373c007a4fcd25e70622bdc8deb" (UID: "c9ad9373c007a4fcd25e70622bdc8deb"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:24:10.662647 master-0 kubenswrapper[7614]: I0224 05:24:10.662511 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:10.662647 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:10.662647 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:10.662647 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:10.662647 master-0 kubenswrapper[7614]: I0224 05:24:10.662609 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:10.741705 master-0 kubenswrapper[7614]: I0224 05:24:10.741636 7614 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: 
\"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:10.741705 master-0 kubenswrapper[7614]: I0224 05:24:10.741715 7614 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-secrets\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:10.742045 master-0 kubenswrapper[7614]: I0224 05:24:10.741749 7614 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:10.742045 master-0 kubenswrapper[7614]: I0224 05:24:10.741769 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:10.742045 master-0 kubenswrapper[7614]: I0224 05:24:10.741785 7614 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c9ad9373c007a4fcd25e70622bdc8deb-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:10.768368 master-0 kubenswrapper[7614]: I0224 05:24:10.768328 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:24:10.803029 master-0 kubenswrapper[7614]: W0224 05:24:10.802969 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79656ffd720980cfc7e8a06d9f509855.slice/crio-f2310e416d88da38fbb0fca1d98392abcf713c8ee1ea311eda061455242fba49 WatchSource:0}: Error finding container f2310e416d88da38fbb0fca1d98392abcf713c8ee1ea311eda061455242fba49: Status 404 returned error can't find the container with id f2310e416d88da38fbb0fca1d98392abcf713c8ee1ea311eda061455242fba49 Feb 24 05:24:11.188412 master-0 kubenswrapper[7614]: I0224 05:24:11.188296 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9ad9373c007a4fcd25e70622bdc8deb" path="/var/lib/kubelet/pods/c9ad9373c007a4fcd25e70622bdc8deb/volumes" Feb 24 05:24:11.188933 master-0 kubenswrapper[7614]: I0224 05:24:11.188894 7614 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Feb 24 05:24:11.215641 master-0 kubenswrapper[7614]: I0224 05:24:11.215450 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 24 05:24:11.215641 master-0 kubenswrapper[7614]: I0224 05:24:11.215563 7614 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="208f05b2-30d0-4793-9268-5d8f16844324" Feb 24 05:24:11.215641 master-0 kubenswrapper[7614]: I0224 05:24:11.215620 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 24 05:24:11.215641 master-0 kubenswrapper[7614]: I0224 05:24:11.215646 7614 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" 
mirrorPodUID="208f05b2-30d0-4793-9268-5d8f16844324" Feb 24 05:24:11.528630 master-0 kubenswrapper[7614]: I0224 05:24:11.528564 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0"} Feb 24 05:24:11.528630 master-0 kubenswrapper[7614]: I0224 05:24:11.528626 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"f2310e416d88da38fbb0fca1d98392abcf713c8ee1ea311eda061455242fba49"} Feb 24 05:24:11.530644 master-0 kubenswrapper[7614]: I0224 05:24:11.530264 7614 generic.go:334] "Generic (PLEG): container finished" podID="4e058a29-f50f-473a-a217-0021923ebc7c" containerID="4a683c2df0643cd32ba4287e2bcfda52e85d58cdef62154fe0290d7b742d186c" exitCode=0 Feb 24 05:24:11.530644 master-0 kubenswrapper[7614]: I0224 05:24:11.530339 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"4e058a29-f50f-473a-a217-0021923ebc7c","Type":"ContainerDied","Data":"4a683c2df0643cd32ba4287e2bcfda52e85d58cdef62154fe0290d7b742d186c"} Feb 24 05:24:11.539990 master-0 kubenswrapper[7614]: I0224 05:24:11.539928 7614 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a" exitCode=0 Feb 24 05:24:11.540088 master-0 kubenswrapper[7614]: I0224 05:24:11.540013 7614 generic.go:334] "Generic (PLEG): container finished" podID="c9ad9373c007a4fcd25e70622bdc8deb" containerID="f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e" exitCode=0 Feb 24 05:24:11.540188 master-0 kubenswrapper[7614]: I0224 05:24:11.540149 7614 scope.go:117] 
"RemoveContainer" containerID="5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a" Feb 24 05:24:11.540401 master-0 kubenswrapper[7614]: I0224 05:24:11.540373 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 24 05:24:11.577879 master-0 kubenswrapper[7614]: I0224 05:24:11.577833 7614 scope.go:117] "RemoveContainer" containerID="14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402" Feb 24 05:24:11.613943 master-0 kubenswrapper[7614]: I0224 05:24:11.613895 7614 scope.go:117] "RemoveContainer" containerID="f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e" Feb 24 05:24:11.646172 master-0 kubenswrapper[7614]: I0224 05:24:11.646116 7614 scope.go:117] "RemoveContainer" containerID="5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a" Feb 24 05:24:11.646726 master-0 kubenswrapper[7614]: E0224 05:24:11.646679 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a\": container with ID starting with 5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a not found: ID does not exist" containerID="5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a" Feb 24 05:24:11.646767 master-0 kubenswrapper[7614]: I0224 05:24:11.646733 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a"} err="failed to get container status \"5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a\": rpc error: code = NotFound desc = could not find container \"5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a\": container with ID starting with 5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a not found: ID does not exist" Feb 24 05:24:11.646818 
master-0 kubenswrapper[7614]: I0224 05:24:11.646766 7614 scope.go:117] "RemoveContainer" containerID="14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402" Feb 24 05:24:11.647454 master-0 kubenswrapper[7614]: E0224 05:24:11.647299 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402\": container with ID starting with 14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402 not found: ID does not exist" containerID="14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402" Feb 24 05:24:11.647454 master-0 kubenswrapper[7614]: I0224 05:24:11.647378 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402"} err="failed to get container status \"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402\": rpc error: code = NotFound desc = could not find container \"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402\": container with ID starting with 14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402 not found: ID does not exist" Feb 24 05:24:11.647454 master-0 kubenswrapper[7614]: I0224 05:24:11.647396 7614 scope.go:117] "RemoveContainer" containerID="f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e" Feb 24 05:24:11.648061 master-0 kubenswrapper[7614]: E0224 05:24:11.647959 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e\": container with ID starting with f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e not found: ID does not exist" containerID="f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e" Feb 24 05:24:11.648061 master-0 kubenswrapper[7614]: I0224 05:24:11.647995 7614 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e"} err="failed to get container status \"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e\": rpc error: code = NotFound desc = could not find container \"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e\": container with ID starting with f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e not found: ID does not exist"
Feb 24 05:24:11.648061 master-0 kubenswrapper[7614]: I0224 05:24:11.648015 7614 scope.go:117] "RemoveContainer" containerID="5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a"
Feb 24 05:24:11.648371 master-0 kubenswrapper[7614]: I0224 05:24:11.648334 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a"} err="failed to get container status \"5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a\": rpc error: code = NotFound desc = could not find container \"5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a\": container with ID starting with 5557a4a59d6ed82812c29ef2ac4e6682ca871ccd9af2af6045fce7cc16101c3a not found: ID does not exist"
Feb 24 05:24:11.648371 master-0 kubenswrapper[7614]: I0224 05:24:11.648362 7614 scope.go:117] "RemoveContainer" containerID="14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402"
Feb 24 05:24:11.648696 master-0 kubenswrapper[7614]: I0224 05:24:11.648662 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402"} err="failed to get container status \"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402\": rpc error: code = NotFound desc = could not find container \"14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402\": container with ID starting with 14f05c07a59574af5d65c41c4cab4b8f70b44e6f6cb8561bf8b61c71a4263402 not found: ID does not exist"
Feb 24 05:24:11.648754 master-0 kubenswrapper[7614]: I0224 05:24:11.648684 7614 scope.go:117] "RemoveContainer" containerID="f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e"
Feb 24 05:24:11.649001 master-0 kubenswrapper[7614]: I0224 05:24:11.648962 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e"} err="failed to get container status \"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e\": rpc error: code = NotFound desc = could not find container \"f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e\": container with ID starting with f4b821030f840d2d8e85c4724668b0193cc5837173c2d037d234e06dfa6a0d7e not found: ID does not exist"
Feb 24 05:24:11.662229 master-0 kubenswrapper[7614]: I0224 05:24:11.662172 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:11.662229 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:11.662229 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:11.662229 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:11.662450 master-0 kubenswrapper[7614]: I0224 05:24:11.662242 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:12.552852 master-0 kubenswrapper[7614]: I0224 05:24:12.552685 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997"}
Feb 24 05:24:12.552852 master-0 kubenswrapper[7614]: I0224 05:24:12.552752 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c"}
Feb 24 05:24:12.552852 master-0 kubenswrapper[7614]: I0224 05:24:12.552764 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"bddc98ab8f891bcfeab1f13ad02fb7915d32f69a34209664b3c92c1ac4cbbe83"}
Feb 24 05:24:12.555624 master-0 kubenswrapper[7614]: I0224 05:24:12.555576 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" event={"ID":"1163571d-f555-41ad-b04c-74c2dc452efe","Type":"ContainerStarted","Data":"3cb11eef8d37a0b70fc9a1af497eeefe197c159ae0710f6becc15f045fb6b447"}
Feb 24 05:24:12.602141 master-0 kubenswrapper[7614]: I0224 05:24:12.599281 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.5992437219999998 podStartE2EDuration="2.599243722s" podCreationTimestamp="2026-02-24 05:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:24:12.591834427 +0000 UTC m=+583.626577633" watchObservedRunningTime="2026-02-24 05:24:12.599243722 +0000 UTC m=+583.633986908"
Feb 24 05:24:12.670365 master-0 kubenswrapper[7614]: I0224 05:24:12.669589 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:12.670365 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:12.670365 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:12.670365 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:12.670365 master-0 kubenswrapper[7614]: I0224 05:24:12.669670 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:12.903777 master-0 kubenswrapper[7614]: I0224 05:24:12.903709 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:24:12.992770 master-0 kubenswrapper[7614]: I0224 05:24:12.992691 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e058a29-f50f-473a-a217-0021923ebc7c-kube-api-access\") pod \"4e058a29-f50f-473a-a217-0021923ebc7c\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") "
Feb 24 05:24:12.993053 master-0 kubenswrapper[7614]: I0224 05:24:12.992878 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-var-lock\") pod \"4e058a29-f50f-473a-a217-0021923ebc7c\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") "
Feb 24 05:24:12.993204 master-0 kubenswrapper[7614]: I0224 05:24:12.993058 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-var-lock" (OuterVolumeSpecName: "var-lock") pod "4e058a29-f50f-473a-a217-0021923ebc7c" (UID: "4e058a29-f50f-473a-a217-0021923ebc7c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:12.993434 master-0 kubenswrapper[7614]: I0224 05:24:12.993397 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-kubelet-dir\") pod \"4e058a29-f50f-473a-a217-0021923ebc7c\" (UID: \"4e058a29-f50f-473a-a217-0021923ebc7c\") "
Feb 24 05:24:12.993651 master-0 kubenswrapper[7614]: I0224 05:24:12.993601 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4e058a29-f50f-473a-a217-0021923ebc7c" (UID: "4e058a29-f50f-473a-a217-0021923ebc7c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:12.994923 master-0 kubenswrapper[7614]: I0224 05:24:12.994860 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:12.994923 master-0 kubenswrapper[7614]: I0224 05:24:12.994918 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e058a29-f50f-473a-a217-0021923ebc7c-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:12.998004 master-0 kubenswrapper[7614]: I0224 05:24:12.997940 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e058a29-f50f-473a-a217-0021923ebc7c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4e058a29-f50f-473a-a217-0021923ebc7c" (UID: "4e058a29-f50f-473a-a217-0021923ebc7c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:24:13.096642 master-0 kubenswrapper[7614]: I0224 05:24:13.096527 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e058a29-f50f-473a-a217-0021923ebc7c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:13.568357 master-0 kubenswrapper[7614]: I0224 05:24:13.567830 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 24 05:24:13.568357 master-0 kubenswrapper[7614]: I0224 05:24:13.567866 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"4e058a29-f50f-473a-a217-0021923ebc7c","Type":"ContainerDied","Data":"769734d30190536a2d572317485788006caf1f452e2bf4039cbb5f5e275cd997"}
Feb 24 05:24:13.568357 master-0 kubenswrapper[7614]: I0224 05:24:13.568017 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="769734d30190536a2d572317485788006caf1f452e2bf4039cbb5f5e275cd997"
Feb 24 05:24:13.572918 master-0 kubenswrapper[7614]: I0224 05:24:13.572821 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" event={"ID":"1163571d-f555-41ad-b04c-74c2dc452efe","Type":"ContainerStarted","Data":"ac6c2b4cc37bc755a74cd05e1baea6c520c382e7daa791530c99b6c97bb47a31"}
Feb 24 05:24:13.662694 master-0 kubenswrapper[7614]: I0224 05:24:13.662581 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:13.662694 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:13.662694 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:13.662694 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:13.663174 master-0 kubenswrapper[7614]: I0224 05:24:13.662714 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:14.585089 master-0 kubenswrapper[7614]: I0224 05:24:14.585032 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" event={"ID":"1163571d-f555-41ad-b04c-74c2dc452efe","Type":"ContainerStarted","Data":"51e58077dc85d613d929664a0dd3206a8364a6f2b4b2c57cf4f65a3af6759011"}
Feb 24 05:24:14.640592 master-0 kubenswrapper[7614]: I0224 05:24:14.640450 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" podStartSLOduration=3.021881313 podStartE2EDuration="6.64039985s" podCreationTimestamp="2026-02-24 05:24:08 +0000 UTC" firstStartedPulling="2026-02-24 05:24:09.654280625 +0000 UTC m=+580.689023781" lastFinishedPulling="2026-02-24 05:24:13.272799162 +0000 UTC m=+584.307542318" observedRunningTime="2026-02-24 05:24:14.63429791 +0000 UTC m=+585.669041056" watchObservedRunningTime="2026-02-24 05:24:14.64039985 +0000 UTC m=+585.675143046"
Feb 24 05:24:14.667406 master-0 kubenswrapper[7614]: I0224 05:24:14.667231 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:14.667406 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:14.667406 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:14.667406 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:14.667406 master-0 kubenswrapper[7614]: I0224 05:24:14.667377 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:15.663858 master-0 kubenswrapper[7614]: I0224 05:24:15.663743 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:15.663858 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:15.663858 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:15.663858 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:15.665211 master-0 kubenswrapper[7614]: I0224 05:24:15.663880 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:16.663544 master-0 kubenswrapper[7614]: I0224 05:24:16.663426 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:16.663544 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:16.663544 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:16.663544 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:16.664625 master-0 kubenswrapper[7614]: I0224 05:24:16.663567 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:17.663841 master-0 kubenswrapper[7614]: I0224 05:24:17.663727 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:17.663841 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:17.663841 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:17.663841 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:17.663841 master-0 kubenswrapper[7614]: I0224 05:24:17.663843 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:18.624982 master-0 kubenswrapper[7614]: E0224 05:24:18.624477 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 05:24:18.627009 master-0 kubenswrapper[7614]: E0224 05:24:18.626912 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 05:24:18.629723 master-0 kubenswrapper[7614]: E0224 05:24:18.629618 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 24 05:24:18.629859 master-0 kubenswrapper[7614]: E0224 05:24:18.629752 7614 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" containerName="kube-multus-additional-cni-plugins"
Feb 24 05:24:18.663219 master-0 kubenswrapper[7614]: I0224 05:24:18.663150 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:18.663219 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:18.663219 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:18.663219 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:18.663900 master-0 kubenswrapper[7614]: I0224 05:24:18.663848 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:19.662711 master-0 kubenswrapper[7614]: I0224 05:24:19.662516 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:19.662711 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:19.662711 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:19.662711 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:19.664178 master-0 kubenswrapper[7614]: I0224 05:24:19.662916 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:20.664560 master-0 kubenswrapper[7614]: I0224 05:24:20.664475 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:20.664560 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:20.664560 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:20.664560 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:20.665733 master-0 kubenswrapper[7614]: I0224 05:24:20.664597 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:20.769716 master-0 kubenswrapper[7614]: I0224 05:24:20.769588 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:24:20.769716 master-0 kubenswrapper[7614]: I0224 05:24:20.769710 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:24:20.769716 master-0 kubenswrapper[7614]: I0224 05:24:20.769740 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:24:20.770240 master-0 kubenswrapper[7614]: I0224 05:24:20.769762 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:24:20.777489 master-0 kubenswrapper[7614]: I0224 05:24:20.777419 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:24:20.777956 master-0 kubenswrapper[7614]: I0224 05:24:20.777894 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:24:20.890192 master-0 kubenswrapper[7614]: I0224 05:24:20.890067 7614 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 24 05:24:20.890942 master-0 kubenswrapper[7614]: I0224 05:24:20.890861 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl" containerID="cri-o://3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138" gracePeriod=30
Feb 24 05:24:20.891032 master-0 kubenswrapper[7614]: I0224 05:24:20.890910 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev" containerID="cri-o://7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d" gracePeriod=30
Feb 24 05:24:20.891102 master-0 kubenswrapper[7614]: I0224 05:24:20.891016 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics" containerID="cri-o://91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e" gracePeriod=30
Feb 24 05:24:20.891179 master-0 kubenswrapper[7614]: I0224 05:24:20.891051 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd" containerID="cri-o://cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3" gracePeriod=30
Feb 24 05:24:20.891245 master-0 kubenswrapper[7614]: I0224 05:24:20.891089 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz" containerID="cri-o://4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1" gracePeriod=30
Feb 24 05:24:20.894423 master-0 kubenswrapper[7614]: I0224 05:24:20.894353 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Feb 24 05:24:20.894962 master-0 kubenswrapper[7614]: E0224 05:24:20.894910 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 24 05:24:20.894962 master-0 kubenswrapper[7614]: I0224 05:24:20.894962 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: E0224 05:24:20.895001 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: I0224 05:24:20.895020 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="setup"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: E0224 05:24:20.895049 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: I0224 05:24:20.895065 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: E0224 05:24:20.895091 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: I0224 05:24:20.895109 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-resources-copy"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: E0224 05:24:20.895143 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: I0224 05:24:20.895161 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-ensure-env-vars"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: E0224 05:24:20.895187 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 24 05:24:20.895209 master-0 kubenswrapper[7614]: I0224 05:24:20.895204 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: E0224 05:24:20.895236 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895254 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: E0224 05:24:20.895280 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e058a29-f50f-473a-a217-0021923ebc7c" containerName="installer"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895297 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e058a29-f50f-473a-a217-0021923ebc7c" containerName="installer"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: E0224 05:24:20.895364 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895385 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895706 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-rev"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895757 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-readyz"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895784 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcdctl"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895811 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895836 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e058a29-f50f-473a-a217-0021923ebc7c" containerName="installer"
Feb 24 05:24:20.896112 master-0 kubenswrapper[7614]: I0224 05:24:20.895864 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a83278819db2092fa26d8274eb3f00" containerName="etcd-metrics"
Feb 24 05:24:21.060082 master-0 kubenswrapper[7614]: I0224 05:24:21.059982 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.060082 master-0 kubenswrapper[7614]: I0224 05:24:21.060076 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.060545 master-0 kubenswrapper[7614]: I0224 05:24:21.060161 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.061164 master-0 kubenswrapper[7614]: I0224 05:24:21.061105 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.061265 master-0 kubenswrapper[7614]: I0224 05:24:21.061182 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.061265 master-0 kubenswrapper[7614]: I0224 05:24:21.061224 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.163299 master-0 kubenswrapper[7614]: I0224 05:24:21.163166 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.163799 master-0 kubenswrapper[7614]: I0224 05:24:21.163427 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.163799 master-0 kubenswrapper[7614]: I0224 05:24:21.163433 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.163799 master-0 kubenswrapper[7614]: I0224 05:24:21.163499 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.163799 master-0 kubenswrapper[7614]: I0224 05:24:21.163589 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.163799 master-0 kubenswrapper[7614]: I0224 05:24:21.163750 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.164148 master-0 kubenswrapper[7614]: I0224 05:24:21.163842 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.164148 master-0 kubenswrapper[7614]: I0224 05:24:21.163872 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.164148 master-0 kubenswrapper[7614]: I0224 05:24:21.163977 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.164148 master-0 kubenswrapper[7614]: I0224 05:24:21.164002 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.164148 master-0 kubenswrapper[7614]: I0224 05:24:21.164092 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.164548 master-0 kubenswrapper[7614]: I0224 05:24:21.164121 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:21.663064 master-0 kubenswrapper[7614]: I0224 05:24:21.662978 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:21.663064 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:21.663064 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:21.663064 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:21.663064 master-0 kubenswrapper[7614]: I0224 05:24:21.663055 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:21.665001 master-0 kubenswrapper[7614]: I0224 05:24:21.664954 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 24 05:24:21.666371 master-0 kubenswrapper[7614]: I0224 05:24:21.666280 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 24 05:24:21.669143 master-0 kubenswrapper[7614]: I0224 05:24:21.669089 7614 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d" exitCode=2
Feb 24 05:24:21.669143 master-0 kubenswrapper[7614]: I0224 05:24:21.669127 7614 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1" exitCode=0
Feb 24 05:24:21.669143 master-0 kubenswrapper[7614]: I0224 05:24:21.669143 7614 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e" exitCode=2
Feb 24 05:24:21.675608 master-0 kubenswrapper[7614]: I0224 05:24:21.675547 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:24:21.677168 master-0 kubenswrapper[7614]: I0224 05:24:21.677104 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:24:22.663383 master-0 kubenswrapper[7614]: I0224 05:24:22.663195 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:22.663383 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:22.663383 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:22.663383 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:22.663383 master-0 kubenswrapper[7614]: I0224 05:24:22.663357 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:23.664498 master-0 kubenswrapper[7614]: I0224 05:24:23.664397 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed:
reason withheld Feb 24 05:24:23.664498 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:23.664498 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:23.664498 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:23.665902 master-0 kubenswrapper[7614]: I0224 05:24:23.664513 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:24.664120 master-0 kubenswrapper[7614]: I0224 05:24:24.663990 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:24.664120 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:24.664120 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:24.664120 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:24.664120 master-0 kubenswrapper[7614]: I0224 05:24:24.664108 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:25.663864 master-0 kubenswrapper[7614]: I0224 05:24:25.663732 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:25.663864 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:25.663864 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:25.663864 master-0 
kubenswrapper[7614]: healthz check failed Feb 24 05:24:25.663864 master-0 kubenswrapper[7614]: I0224 05:24:25.663813 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:26.664223 master-0 kubenswrapper[7614]: I0224 05:24:26.664101 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:26.664223 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:26.664223 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:26.664223 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:26.664223 master-0 kubenswrapper[7614]: I0224 05:24:26.664210 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:27.664054 master-0 kubenswrapper[7614]: I0224 05:24:27.663876 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:27.664054 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:27.664054 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:27.664054 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:27.665173 master-0 kubenswrapper[7614]: I0224 05:24:27.664064 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:28.624299 master-0 kubenswrapper[7614]: E0224 05:24:28.624157 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 05:24:28.626705 master-0 kubenswrapper[7614]: E0224 05:24:28.626598 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 05:24:28.628973 master-0 kubenswrapper[7614]: E0224 05:24:28.628914 7614 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 24 05:24:28.629114 master-0 kubenswrapper[7614]: E0224 05:24:28.628982 7614 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" containerName="kube-multus-additional-cni-plugins" Feb 24 05:24:28.663371 master-0 kubenswrapper[7614]: I0224 05:24:28.663262 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:28.663371 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:28.663371 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:28.663371 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:28.663705 master-0 kubenswrapper[7614]: I0224 05:24:28.663403 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:29.662615 master-0 kubenswrapper[7614]: I0224 05:24:29.662538 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:29.662615 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:29.662615 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:29.662615 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:29.663674 master-0 kubenswrapper[7614]: I0224 05:24:29.662639 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:30.662879 master-0 kubenswrapper[7614]: I0224 05:24:30.662784 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:30.662879 master-0 kubenswrapper[7614]: 
[-]has-synced failed: reason withheld Feb 24 05:24:30.662879 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:30.662879 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:30.664426 master-0 kubenswrapper[7614]: I0224 05:24:30.662909 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:31.663512 master-0 kubenswrapper[7614]: I0224 05:24:31.663365 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:31.663512 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:31.663512 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:31.663512 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:31.663512 master-0 kubenswrapper[7614]: I0224 05:24:31.663494 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:32.564362 master-0 kubenswrapper[7614]: I0224 05:24:32.564209 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-j28p2_2303d3b8-fe6a-469a-a306-4e1685181dbe/kube-multus-additional-cni-plugins/0.log" Feb 24 05:24:32.564362 master-0 kubenswrapper[7614]: I0224 05:24:32.564375 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" Feb 24 05:24:32.664513 master-0 kubenswrapper[7614]: I0224 05:24:32.664351 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:32.664513 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:32.664513 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:32.664513 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:32.664513 master-0 kubenswrapper[7614]: I0224 05:24:32.664473 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:32.701194 master-0 kubenswrapper[7614]: I0224 05:24:32.700995 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2303d3b8-fe6a-469a-a306-4e1685181dbe-ready\") pod \"2303d3b8-fe6a-469a-a306-4e1685181dbe\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " Feb 24 05:24:32.701194 master-0 kubenswrapper[7614]: I0224 05:24:32.701110 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2303d3b8-fe6a-469a-a306-4e1685181dbe-tuning-conf-dir\") pod \"2303d3b8-fe6a-469a-a306-4e1685181dbe\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " Feb 24 05:24:32.701554 master-0 kubenswrapper[7614]: I0224 05:24:32.701214 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cvkh\" (UniqueName: \"kubernetes.io/projected/2303d3b8-fe6a-469a-a306-4e1685181dbe-kube-api-access-6cvkh\") pod 
\"2303d3b8-fe6a-469a-a306-4e1685181dbe\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " Feb 24 05:24:32.701554 master-0 kubenswrapper[7614]: I0224 05:24:32.701257 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2303d3b8-fe6a-469a-a306-4e1685181dbe-cni-sysctl-allowlist\") pod \"2303d3b8-fe6a-469a-a306-4e1685181dbe\" (UID: \"2303d3b8-fe6a-469a-a306-4e1685181dbe\") " Feb 24 05:24:32.701554 master-0 kubenswrapper[7614]: I0224 05:24:32.701373 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2303d3b8-fe6a-469a-a306-4e1685181dbe-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "2303d3b8-fe6a-469a-a306-4e1685181dbe" (UID: "2303d3b8-fe6a-469a-a306-4e1685181dbe"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:24:32.701767 master-0 kubenswrapper[7614]: I0224 05:24:32.701674 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2303d3b8-fe6a-469a-a306-4e1685181dbe-ready" (OuterVolumeSpecName: "ready") pod "2303d3b8-fe6a-469a-a306-4e1685181dbe" (UID: "2303d3b8-fe6a-469a-a306-4e1685181dbe"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:24:32.702006 master-0 kubenswrapper[7614]: I0224 05:24:32.701952 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2303d3b8-fe6a-469a-a306-4e1685181dbe-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "2303d3b8-fe6a-469a-a306-4e1685181dbe" (UID: "2303d3b8-fe6a-469a-a306-4e1685181dbe"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:24:32.702250 master-0 kubenswrapper[7614]: I0224 05:24:32.702211 7614 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2303d3b8-fe6a-469a-a306-4e1685181dbe-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:32.702250 master-0 kubenswrapper[7614]: I0224 05:24:32.702232 7614 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2303d3b8-fe6a-469a-a306-4e1685181dbe-ready\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:32.702250 master-0 kubenswrapper[7614]: I0224 05:24:32.702241 7614 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2303d3b8-fe6a-469a-a306-4e1685181dbe-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:32.704478 master-0 kubenswrapper[7614]: I0224 05:24:32.704404 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2303d3b8-fe6a-469a-a306-4e1685181dbe-kube-api-access-6cvkh" (OuterVolumeSpecName: "kube-api-access-6cvkh") pod "2303d3b8-fe6a-469a-a306-4e1685181dbe" (UID: "2303d3b8-fe6a-469a-a306-4e1685181dbe"). InnerVolumeSpecName "kube-api-access-6cvkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:24:32.762114 master-0 kubenswrapper[7614]: I0224 05:24:32.762032 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-j28p2_2303d3b8-fe6a-469a-a306-4e1685181dbe/kube-multus-additional-cni-plugins/0.log" Feb 24 05:24:32.762438 master-0 kubenswrapper[7614]: I0224 05:24:32.762129 7614 generic.go:334] "Generic (PLEG): container finished" podID="2303d3b8-fe6a-469a-a306-4e1685181dbe" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" exitCode=137 Feb 24 05:24:32.762438 master-0 kubenswrapper[7614]: I0224 05:24:32.762183 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" event={"ID":"2303d3b8-fe6a-469a-a306-4e1685181dbe","Type":"ContainerDied","Data":"e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f"} Feb 24 05:24:32.762438 master-0 kubenswrapper[7614]: I0224 05:24:32.762238 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" event={"ID":"2303d3b8-fe6a-469a-a306-4e1685181dbe","Type":"ContainerDied","Data":"ed7a8ce67f1a3dbd05e9ef13a20015a9c7a3ffc856c5287128e78f3d3c245000"} Feb 24 05:24:32.762438 master-0 kubenswrapper[7614]: I0224 05:24:32.762279 7614 scope.go:117] "RemoveContainer" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" Feb 24 05:24:32.762438 master-0 kubenswrapper[7614]: I0224 05:24:32.762345 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" Feb 24 05:24:32.786568 master-0 kubenswrapper[7614]: I0224 05:24:32.786499 7614 scope.go:117] "RemoveContainer" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" Feb 24 05:24:32.787354 master-0 kubenswrapper[7614]: E0224 05:24:32.787251 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f\": container with ID starting with e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f not found: ID does not exist" containerID="e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f" Feb 24 05:24:32.787448 master-0 kubenswrapper[7614]: I0224 05:24:32.787373 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f"} err="failed to get container status \"e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f\": rpc error: code = NotFound desc = could not find container \"e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f\": container with ID starting with e23e1fc319485f4b7af40af05bab78f64dc6feb9347acaf86cd9fe42147a009f not found: ID does not exist" Feb 24 05:24:32.804107 master-0 kubenswrapper[7614]: I0224 05:24:32.803982 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cvkh\" (UniqueName: \"kubernetes.io/projected/2303d3b8-fe6a-469a-a306-4e1685181dbe-kube-api-access-6cvkh\") on node \"master-0\" DevicePath \"\"" Feb 24 05:24:33.664545 master-0 kubenswrapper[7614]: I0224 05:24:33.664388 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:33.664545 
master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:33.664545 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:33.664545 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:33.664545 master-0 kubenswrapper[7614]: I0224 05:24:33.664534 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:33.903163 master-0 kubenswrapper[7614]: E0224 05:24:33.902780 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:24:34.664593 master-0 kubenswrapper[7614]: I0224 05:24:34.664458 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:34.664593 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:34.664593 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:34.664593 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:34.665833 master-0 kubenswrapper[7614]: I0224 05:24:34.664611 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:35.663721 master-0 kubenswrapper[7614]: I0224 05:24:35.663627 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:24:35.663721 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:24:35.663721 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:24:35.663721 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:24:35.664234 master-0 kubenswrapper[7614]: I0224 05:24:35.663743 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:24:35.808354 master-0 kubenswrapper[7614]: I0224 05:24:35.808211 7614 generic.go:334] "Generic (PLEG): container finished" podID="29b0d9bb-1b88-4023-8b08-896d581c79c7" containerID="e12e5627ae03ebb97ca362b2b8faa759ca1b9a419649b89bb29941198d85f2b3" exitCode=0 Feb 24 05:24:35.808354 master-0 kubenswrapper[7614]: I0224 05:24:35.808285 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"29b0d9bb-1b88-4023-8b08-896d581c79c7","Type":"ContainerDied","Data":"e12e5627ae03ebb97ca362b2b8faa759ca1b9a419649b89bb29941198d85f2b3"} Feb 24 05:24:36.059372 master-0 kubenswrapper[7614]: E0224 05:24:36.059067 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:24:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:24:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:24:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:24:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-opera
tor-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d727
9665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:24:36.663716 master-0 kubenswrapper[7614]: I0224 05:24:36.663617 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:24:36.663716 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:24:36.663716 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:24:36.663716 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:24:36.664534 master-0 kubenswrapper[7614]: I0224 05:24:36.663731 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:24:36.664534 master-0 kubenswrapper[7614]: I0224 05:24:36.663819 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:24:36.664887 master-0 kubenswrapper[7614]: I0224 05:24:36.664824 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"644f295cce6b864cf139013130d16889b14ef33754986616f48c2d2d58ffa92d"} pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" containerMessage="Container router failed startup probe, will be restarted"
Feb 24 05:24:36.664996 master-0 kubenswrapper[7614]: I0224 05:24:36.664898 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" containerID="cri-o://644f295cce6b864cf139013130d16889b14ef33754986616f48c2d2d58ffa92d" gracePeriod=3600
Feb 24 05:24:37.273562 master-0 kubenswrapper[7614]: I0224 05:24:37.273450 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 24 05:24:37.294730 master-0 kubenswrapper[7614]: I0224 05:24:37.294647 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-var-lock\") pod \"29b0d9bb-1b88-4023-8b08-896d581c79c7\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") "
Feb 24 05:24:37.295184 master-0 kubenswrapper[7614]: I0224 05:24:37.294757 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-kubelet-dir\") pod \"29b0d9bb-1b88-4023-8b08-896d581c79c7\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") "
Feb 24 05:24:37.295184 master-0 kubenswrapper[7614]: I0224 05:24:37.295081 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b0d9bb-1b88-4023-8b08-896d581c79c7-kube-api-access\") pod \"29b0d9bb-1b88-4023-8b08-896d581c79c7\" (UID: \"29b0d9bb-1b88-4023-8b08-896d581c79c7\") "
Feb 24 05:24:37.295184 master-0 kubenswrapper[7614]: I0224 05:24:37.295085 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-var-lock" (OuterVolumeSpecName: "var-lock") pod "29b0d9bb-1b88-4023-8b08-896d581c79c7" (UID: "29b0d9bb-1b88-4023-8b08-896d581c79c7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:37.295523 master-0 kubenswrapper[7614]: I0224 05:24:37.295214 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "29b0d9bb-1b88-4023-8b08-896d581c79c7" (UID: "29b0d9bb-1b88-4023-8b08-896d581c79c7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:37.295523 master-0 kubenswrapper[7614]: I0224 05:24:37.295486 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:37.295523 master-0 kubenswrapper[7614]: I0224 05:24:37.295507 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/29b0d9bb-1b88-4023-8b08-896d581c79c7-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:37.300252 master-0 kubenswrapper[7614]: I0224 05:24:37.300196 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29b0d9bb-1b88-4023-8b08-896d581c79c7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "29b0d9bb-1b88-4023-8b08-896d581c79c7" (UID: "29b0d9bb-1b88-4023-8b08-896d581c79c7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:24:37.397836 master-0 kubenswrapper[7614]: I0224 05:24:37.397715 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29b0d9bb-1b88-4023-8b08-896d581c79c7-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:37.828461 master-0 kubenswrapper[7614]: I0224 05:24:37.828368 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"29b0d9bb-1b88-4023-8b08-896d581c79c7","Type":"ContainerDied","Data":"e62e33bc2b32fa546c8b71cdec9803c18e73e881c996067ed355eb35c01427f7"}
Feb 24 05:24:37.828461 master-0 kubenswrapper[7614]: I0224 05:24:37.828438 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e62e33bc2b32fa546c8b71cdec9803c18e73e881c996067ed355eb35c01427f7"
Feb 24 05:24:37.829022 master-0 kubenswrapper[7614]: I0224 05:24:37.828508 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Feb 24 05:24:43.903968 master-0 kubenswrapper[7614]: E0224 05:24:43.903860 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:24:45.897850 master-0 kubenswrapper[7614]: I0224 05:24:45.897725 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/0.log"
Feb 24 05:24:45.898705 master-0 kubenswrapper[7614]: I0224 05:24:45.898612 7614 generic.go:334] "Generic (PLEG): container finished" podID="59333a14-5bdc-4590-a3da-af7300f086da" containerID="d5ce8ccd581f3f0a727f122a907bfeeff964d35571ffdd52c3f7804a92dfb1d9" exitCode=1
Feb 24 05:24:45.898768 master-0 kubenswrapper[7614]: I0224 05:24:45.898729 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerDied","Data":"d5ce8ccd581f3f0a727f122a907bfeeff964d35571ffdd52c3f7804a92dfb1d9"}
Feb 24 05:24:45.899728 master-0 kubenswrapper[7614]: I0224 05:24:45.899681 7614 scope.go:117] "RemoveContainer" containerID="d5ce8ccd581f3f0a727f122a907bfeeff964d35571ffdd52c3f7804a92dfb1d9"
Feb 24 05:24:46.059926 master-0 kubenswrapper[7614]: E0224 05:24:46.059810 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:24:46.911140 master-0 kubenswrapper[7614]: I0224 05:24:46.911058 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/0.log"
Feb 24 05:24:46.911833 master-0 kubenswrapper[7614]: I0224 05:24:46.911154 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerStarted","Data":"d5c20b92312f36a79271d5fd1a9a93a147a0f9575364641bb14c812c34fb24f8"}
Feb 24 05:24:50.960679 master-0 kubenswrapper[7614]: I0224 05:24:50.960553 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 24 05:24:50.962180 master-0 kubenswrapper[7614]: I0224 05:24:50.962120 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 24 05:24:50.963421 master-0 kubenswrapper[7614]: I0224 05:24:50.963363 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log"
Feb 24 05:24:50.965142 master-0 kubenswrapper[7614]: I0224 05:24:50.965028 7614 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138" exitCode=137
Feb 24 05:24:51.516752 master-0 kubenswrapper[7614]: I0224 05:24:51.515775 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 24 05:24:51.519174 master-0 kubenswrapper[7614]: I0224 05:24:51.519102 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 24 05:24:51.520508 master-0 kubenswrapper[7614]: I0224 05:24:51.520454 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log"
Feb 24 05:24:51.521422 master-0 kubenswrapper[7614]: I0224 05:24:51.521373 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log"
Feb 24 05:24:51.522766 master-0 kubenswrapper[7614]: I0224 05:24:51.522720 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:51.661459 master-0 kubenswrapper[7614]: I0224 05:24:51.661271 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir" (OuterVolumeSpecName: "data-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:51.661866 master-0 kubenswrapper[7614]: I0224 05:24:51.661600 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 24 05:24:51.661866 master-0 kubenswrapper[7614]: I0224 05:24:51.661799 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 24 05:24:51.662064 master-0 kubenswrapper[7614]: I0224 05:24:51.661868 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 24 05:24:51.662064 master-0 kubenswrapper[7614]: I0224 05:24:51.661935 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir" (OuterVolumeSpecName: "log-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:51.662064 master-0 kubenswrapper[7614]: I0224 05:24:51.661958 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 24 05:24:51.662064 master-0 kubenswrapper[7614]: I0224 05:24:51.662024 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:51.662433 master-0 kubenswrapper[7614]: I0224 05:24:51.662176 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 24 05:24:51.662433 master-0 kubenswrapper[7614]: I0224 05:24:51.662233 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") pod \"18a83278819db2092fa26d8274eb3f00\" (UID: \"18a83278819db2092fa26d8274eb3f00\") "
Feb 24 05:24:51.662433 master-0 kubenswrapper[7614]: I0224 05:24:51.662370 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:51.662433 master-0 kubenswrapper[7614]: I0224 05:24:51.662305 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:51.663100 master-0 kubenswrapper[7614]: I0224 05:24:51.663043 7614 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:51.663100 master-0 kubenswrapper[7614]: I0224 05:24:51.663086 7614 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-static-pod-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:51.663264 master-0 kubenswrapper[7614]: I0224 05:24:51.663106 7614 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-data-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:51.663264 master-0 kubenswrapper[7614]: I0224 05:24:51.663127 7614 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-log-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:51.663264 master-0 kubenswrapper[7614]: I0224 05:24:51.663149 7614 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-usr-local-bin\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:51.663264 master-0 kubenswrapper[7614]: I0224 05:24:51.663190 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "18a83278819db2092fa26d8274eb3f00" (UID: "18a83278819db2092fa26d8274eb3f00"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:24:51.764573 master-0 kubenswrapper[7614]: I0224 05:24:51.764466 7614 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/18a83278819db2092fa26d8274eb3f00-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:24:51.978666 master-0 kubenswrapper[7614]: I0224 05:24:51.978555 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-rev/0.log"
Feb 24 05:24:51.980222 master-0 kubenswrapper[7614]: I0224 05:24:51.980164 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd-metrics/0.log"
Feb 24 05:24:51.981637 master-0 kubenswrapper[7614]: I0224 05:24:51.981561 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcd/0.log"
Feb 24 05:24:51.982417 master-0 kubenswrapper[7614]: I0224 05:24:51.982305 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_18a83278819db2092fa26d8274eb3f00/etcdctl/0.log"
Feb 24 05:24:51.984538 master-0 kubenswrapper[7614]: I0224 05:24:51.984454 7614 generic.go:334] "Generic (PLEG): container finished" podID="18a83278819db2092fa26d8274eb3f00" containerID="cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3" exitCode=137
Feb 24 05:24:51.984688 master-0 kubenswrapper[7614]: I0224 05:24:51.984567 7614 scope.go:117] "RemoveContainer" containerID="7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d"
Feb 24 05:24:51.984763 master-0 kubenswrapper[7614]: I0224 05:24:51.984666 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 24 05:24:52.012008 master-0 kubenswrapper[7614]: I0224 05:24:52.011938 7614 scope.go:117] "RemoveContainer" containerID="4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1"
Feb 24 05:24:52.046209 master-0 kubenswrapper[7614]: I0224 05:24:52.046106 7614 scope.go:117] "RemoveContainer" containerID="91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e"
Feb 24 05:24:52.076391 master-0 kubenswrapper[7614]: I0224 05:24:52.076339 7614 scope.go:117] "RemoveContainer" containerID="cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3"
Feb 24 05:24:52.103879 master-0 kubenswrapper[7614]: I0224 05:24:52.103792 7614 scope.go:117] "RemoveContainer" containerID="3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138"
Feb 24 05:24:52.128679 master-0 kubenswrapper[7614]: I0224 05:24:52.128613 7614 scope.go:117] "RemoveContainer" containerID="80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b"
Feb 24 05:24:52.163036 master-0 kubenswrapper[7614]: I0224 05:24:52.162982 7614 scope.go:117] "RemoveContainer" containerID="8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c"
Feb 24 05:24:52.200236 master-0 kubenswrapper[7614]: I0224 05:24:52.200180 7614 scope.go:117] "RemoveContainer" containerID="aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46"
Feb 24 05:24:52.240918 master-0 kubenswrapper[7614]: I0224 05:24:52.240838 7614 scope.go:117] "RemoveContainer" containerID="7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d"
Feb 24 05:24:52.241816 master-0 kubenswrapper[7614]: E0224 05:24:52.241729 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d\": container with ID starting with 7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d not found: ID does not exist" containerID="7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d"
Feb 24 05:24:52.241934 master-0 kubenswrapper[7614]: I0224 05:24:52.241827 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d"} err="failed to get container status \"7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d\": rpc error: code = NotFound desc = could not find container \"7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d\": container with ID starting with 7d727fc6e0e5d77f1006238fe0a64fa226a54016635e42983a2c118d1cbf2d4d not found: ID does not exist"
Feb 24 05:24:52.241934 master-0 kubenswrapper[7614]: I0224 05:24:52.241875 7614 scope.go:117] "RemoveContainer" containerID="4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1"
Feb 24 05:24:52.242550 master-0 kubenswrapper[7614]: E0224 05:24:52.242439 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1\": container with ID starting with 4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1 not found: ID does not exist" containerID="4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1"
Feb 24 05:24:52.242550 master-0 kubenswrapper[7614]: I0224 05:24:52.242514 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1"} err="failed to get container status \"4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1\": rpc error: code = NotFound desc = could not find container \"4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1\": container with ID starting with 4bfeb0f00fa17020db1e4f0103b58f3f7fb077a3afee3ad3d0b2ebfe6459b4f1 not found: ID does not exist"
Feb 24 05:24:52.242738 master-0 kubenswrapper[7614]: I0224 05:24:52.242564 7614 scope.go:117] "RemoveContainer" containerID="91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e"
Feb 24 05:24:52.243124 master-0 kubenswrapper[7614]: E0224 05:24:52.243064 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e\": container with ID starting with 91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e not found: ID does not exist" containerID="91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e"
Feb 24 05:24:52.243202 master-0 kubenswrapper[7614]: I0224 05:24:52.243115 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e"} err="failed to get container status \"91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e\": rpc error: code = NotFound desc = could not find container \"91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e\": container with ID starting with 91ecda8417b960b9a029d90ed995bb60f62836abfd23bf2acd7b0e5ecf1da02e not found: ID does not exist"
Feb 24 05:24:52.243202 master-0 kubenswrapper[7614]: I0224 05:24:52.243144 7614 scope.go:117] "RemoveContainer" containerID="cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3"
Feb 24 05:24:52.243661 master-0 kubenswrapper[7614]: E0224 05:24:52.243605 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3\": container with ID starting with cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3 not found: ID does not exist" containerID="cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3"
Feb 24 05:24:52.243754 master-0 kubenswrapper[7614]: I0224 05:24:52.243653 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3"} err="failed to get container status \"cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3\": rpc error: code = NotFound desc = could not find container \"cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3\": container with ID starting with cd4cd16837dd09734e5f614dd2260006bcf56b4e101d508499976475323a14e3 not found: ID does not exist"
Feb 24 05:24:52.243754 master-0 kubenswrapper[7614]: I0224 05:24:52.243683 7614 scope.go:117] "RemoveContainer" containerID="3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138"
Feb 24 05:24:52.244095 master-0 kubenswrapper[7614]: E0224 05:24:52.244037 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138\": container with ID starting with 3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138 not found: ID does not exist" containerID="3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138"
Feb 24 05:24:52.244095 master-0 kubenswrapper[7614]: I0224 05:24:52.244083 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138"} err="failed to get container status \"3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138\": rpc error: code = NotFound desc = could not find container \"3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138\": container with ID starting with 3ecee88921125a5f3daef9f73c06fadc6c6ff979e5d985e6de9e5a03f6b60138 not found: ID does not exist"
Feb 24 05:24:52.244243 master-0 kubenswrapper[7614]: I0224 05:24:52.244111 7614 scope.go:117] "RemoveContainer" containerID="80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b"
Feb 24 05:24:52.244694 master-0 kubenswrapper[7614]: E0224 05:24:52.244600 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b\": container with ID starting with 80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b not found: ID does not exist" containerID="80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b"
Feb 24 05:24:52.244694 master-0 kubenswrapper[7614]: I0224 05:24:52.244682 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b"} err="failed to get container status \"80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b\": rpc error: code = NotFound desc = could not find container \"80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b\": container with ID starting with 80c968b4b9fc354e4a6c8675410d09e381b46a2ac2e807d24b9c6b5794f1030b not found: ID does not exist"
Feb 24 05:24:52.244851 master-0 kubenswrapper[7614]: I0224 05:24:52.244707 7614 scope.go:117] "RemoveContainer" containerID="8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c"
Feb 24 05:24:52.245367 master-0 kubenswrapper[7614]: E0224 05:24:52.245228 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c\": container with ID starting with 8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c not found: ID does not exist" containerID="8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c"
Feb 24 05:24:52.245367 master-0 kubenswrapper[7614]: I0224 05:24:52.245297 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c"} err="failed to get container status \"8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c\": rpc error: code = NotFound desc = could not find container \"8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c\": container with ID starting with 8e7e998099321e92b4a656cc6f1d593f93e765a527cc75d4dc4f7951434a0e8c not found: ID does not exist"
Feb 24 05:24:52.245584 master-0 kubenswrapper[7614]: I0224 05:24:52.245372 7614 scope.go:117] "RemoveContainer" containerID="aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46"
Feb 24 05:24:52.245921 master-0 kubenswrapper[7614]: E0224 05:24:52.245854 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46\": container with ID starting with aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46 not found: ID does not exist" containerID="aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46"
Feb 24 05:24:52.246000 master-0 kubenswrapper[7614]: I0224 05:24:52.245912 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46"} err="failed to get container status \"aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46\": rpc error: code = NotFound desc = could not find container \"aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46\": container with ID starting with aab7b09bde8c1057cef18f32fed6066df8d587332ad0a28d9336f34996955d46 not found: ID does not exist"
Feb 24 05:24:53.185879 master-0 kubenswrapper[7614]: I0224 05:24:53.185794 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a83278819db2092fa26d8274eb3f00" path="/var/lib/kubelet/pods/18a83278819db2092fa26d8274eb3f00/volumes"
Feb 24 05:24:53.904741 master-0 kubenswrapper[7614]: E0224 05:24:53.904623 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:24:54.916682 master-0 kubenswrapper[7614]: E0224 05:24:54.916365 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.1897175bac20b82f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:18a83278819db2092fa26d8274eb3f00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:24:20.890867759 +0000 UTC m=+591.925610955,LastTimestamp:2026-02-24 05:24:20.890867759 +0000 UTC m=+591.925610955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 05:24:56.060822 master-0 kubenswrapper[7614]: E0224 05:24:56.060696 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:25:00.174228 master-0 kubenswrapper[7614]: I0224 05:25:00.174108 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 24 05:25:00.205245 master-0 kubenswrapper[7614]: I0224 05:25:00.205054 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e"
Feb 24 05:25:00.205747 master-0 kubenswrapper[7614]: I0224 05:25:00.205714 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e"
Feb 24 05:25:03.905150 master-0 kubenswrapper[7614]: E0224 05:25:03.904991 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded"
Feb 24 05:25:06.061287 master-0 kubenswrapper[7614]: E0224 05:25:06.061209 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Feb 24 05:25:12.235121 master-0 kubenswrapper[7614]: I0224 05:25:12.235055 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rlg4x_c106275b-72b6-4877-95c3-830f93e35375/approver/1.log"
Feb 24 05:25:12.237105 master-0 kubenswrapper[7614]: I0224 05:25:12.237028 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rlg4x_c106275b-72b6-4877-95c3-830f93e35375/approver/0.log"
Feb 24 05:25:12.238002 master-0 kubenswrapper[7614]: I0224 05:25:12.237916 7614 generic.go:334] "Generic (PLEG): container finished" podID="c106275b-72b6-4877-95c3-830f93e35375" containerID="8d89f8110c46f839405874fb4dba9bf410e3a518ca5d273b143187f669975cd0" exitCode=1
Feb 24 05:25:12.238154 master-0 kubenswrapper[7614]: I0224 05:25:12.238030 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-network-node-identity/network-node-identity-rlg4x" event={"ID":"c106275b-72b6-4877-95c3-830f93e35375","Type":"ContainerDied","Data":"8d89f8110c46f839405874fb4dba9bf410e3a518ca5d273b143187f669975cd0"} Feb 24 05:25:12.238154 master-0 kubenswrapper[7614]: I0224 05:25:12.238136 7614 scope.go:117] "RemoveContainer" containerID="3c48cf95cb20519b43165b534538afb3afad0ec1beb464f9f497eefdb2dc3c0f" Feb 24 05:25:12.239245 master-0 kubenswrapper[7614]: I0224 05:25:12.239206 7614 scope.go:117] "RemoveContainer" containerID="8d89f8110c46f839405874fb4dba9bf410e3a518ca5d273b143187f669975cd0" Feb 24 05:25:12.239644 master-0 kubenswrapper[7614]: E0224 05:25:12.239590 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-rlg4x_openshift-network-node-identity(c106275b-72b6-4877-95c3-830f93e35375)\"" pod="openshift-network-node-identity/network-node-identity-rlg4x" podUID="c106275b-72b6-4877-95c3-830f93e35375" Feb 24 05:25:13.249879 master-0 kubenswrapper[7614]: I0224 05:25:13.249812 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rlg4x_c106275b-72b6-4877-95c3-830f93e35375/approver/1.log" Feb 24 05:25:13.906501 master-0 kubenswrapper[7614]: E0224 05:25:13.906389 7614 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:25:13.906501 master-0 kubenswrapper[7614]: I0224 05:25:13.906484 7614 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 05:25:16.063529 master-0 kubenswrapper[7614]: E0224 05:25:16.063380 7614 kubelet_node_status.go:585] "Error updating node 
status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:25:16.063529 master-0 kubenswrapper[7614]: E0224 05:25:16.063479 7614 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 05:25:21.677099 master-0 kubenswrapper[7614]: I0224 05:25:21.677019 7614 status_manager.go:851] "Failed to get status for pod" podUID="79656ffd720980cfc7e8a06d9f509855" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Feb 24 05:25:23.349419 master-0 kubenswrapper[7614]: I0224 05:25:23.349250 7614 generic.go:334] "Generic (PLEG): container finished" podID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerID="644f295cce6b864cf139013130d16889b14ef33754986616f48c2d2d58ffa92d" exitCode=0 Feb 24 05:25:23.350399 master-0 kubenswrapper[7614]: I0224 05:25:23.349396 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerDied","Data":"644f295cce6b864cf139013130d16889b14ef33754986616f48c2d2d58ffa92d"} Feb 24 05:25:23.350399 master-0 kubenswrapper[7614]: I0224 05:25:23.349520 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerStarted","Data":"ebf89d5ba5d68a652168caf590af22fc79d75d991b321ff2b9f369556f4d28c8"} Feb 24 05:25:23.350399 master-0 kubenswrapper[7614]: I0224 05:25:23.349557 7614 scope.go:117] "RemoveContainer" containerID="140a9b5fdc72c4b3ab1b7bcc97ac10d0500b7b5e5c7d097d9570d8dd233f08cb" Feb 24 05:25:23.660672 master-0 kubenswrapper[7614]: I0224 
05:25:23.660404 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:25:23.664738 master-0 kubenswrapper[7614]: I0224 05:25:23.664663 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:23.664738 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:23.664738 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:23.664738 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:23.664953 master-0 kubenswrapper[7614]: I0224 05:25:23.664771 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:23.907127 master-0 kubenswrapper[7614]: E0224 05:25:23.906984 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 24 05:25:24.662930 master-0 kubenswrapper[7614]: I0224 05:25:24.662841 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:24.662930 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:24.662930 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:24.662930 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:24.663786 
master-0 kubenswrapper[7614]: I0224 05:25:24.663747 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:24.703121 master-0 kubenswrapper[7614]: I0224 05:25:24.703033 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:25:24.703462 master-0 kubenswrapper[7614]: I0224 05:25:24.703145 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:25:25.174650 master-0 kubenswrapper[7614]: I0224 05:25:25.174533 7614 scope.go:117] "RemoveContainer" containerID="8d89f8110c46f839405874fb4dba9bf410e3a518ca5d273b143187f669975cd0" Feb 24 05:25:25.660581 master-0 kubenswrapper[7614]: I0224 05:25:25.660459 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:25:25.663647 master-0 kubenswrapper[7614]: I0224 05:25:25.663584 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:25.663647 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:25.663647 master-0 kubenswrapper[7614]: [+]process-running ok 
Feb 24 05:25:25.663647 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:25.664541 master-0 kubenswrapper[7614]: I0224 05:25:25.663650 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:26.384170 master-0 kubenswrapper[7614]: I0224 05:25:26.384109 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rlg4x_c106275b-72b6-4877-95c3-830f93e35375/approver/1.log" Feb 24 05:25:26.384850 master-0 kubenswrapper[7614]: I0224 05:25:26.384768 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-rlg4x" event={"ID":"c106275b-72b6-4877-95c3-830f93e35375","Type":"ContainerStarted","Data":"9376699893ef77ad9560d13a5bbc2910480f5d269fca5231137e62c2b9f8713a"} Feb 24 05:25:26.663570 master-0 kubenswrapper[7614]: I0224 05:25:26.663376 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:26.663570 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:26.663570 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:26.663570 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:26.663570 master-0 kubenswrapper[7614]: I0224 05:25:26.663517 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:27.664226 master-0 kubenswrapper[7614]: I0224 05:25:27.664077 7614 patch_prober.go:28] 
interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:27.664226 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:27.664226 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:27.664226 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:27.664226 master-0 kubenswrapper[7614]: I0224 05:25:27.664221 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:28.670812 master-0 kubenswrapper[7614]: I0224 05:25:28.663227 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:28.670812 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:28.670812 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:28.670812 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:28.670812 master-0 kubenswrapper[7614]: I0224 05:25:28.663332 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: E0224 05:25:28.921473 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: 
&Event{ObjectMeta:{router-default-7b65dc9fcb-zxkt2.1897171cfcf850d7 openshift-ingress 10517 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7b65dc9fcb-zxkt2,UID:be7a4b9e-1e9a-4298-b804-21b683805c0e,APIVersion:v1,ResourceVersion:9928,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: body: [-]backend-http failed: reason withheld Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:19:51 +0000 UTC,LastTimestamp:2026-02-24 05:24:21.66302826 +0000 UTC m=+592.697771426,Count:225,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Feb 24 05:25:28.921817 master-0 kubenswrapper[7614]: > Feb 24 05:25:29.664322 master-0 kubenswrapper[7614]: I0224 05:25:29.664210 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:29.664322 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:29.664322 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:29.664322 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:29.664682 master-0 kubenswrapper[7614]: I0224 05:25:29.664358 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:30.737920 master-0 kubenswrapper[7614]: I0224 05:25:30.737823 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:30.737920 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:30.737920 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:30.737920 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:30.738941 master-0 kubenswrapper[7614]: I0224 05:25:30.737954 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:31.664050 master-0 kubenswrapper[7614]: I0224 05:25:31.663793 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:31.664050 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:31.664050 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:31.664050 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:31.664050 master-0 kubenswrapper[7614]: I0224 05:25:31.663907 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:31.757709 
master-0 kubenswrapper[7614]: I0224 05:25:31.757591 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7d063f48-5f89-47d0-bafc-84a52839c806/installer/0.log" Feb 24 05:25:31.757709 master-0 kubenswrapper[7614]: I0224 05:25:31.757679 7614 generic.go:334] "Generic (PLEG): container finished" podID="7d063f48-5f89-47d0-bafc-84a52839c806" containerID="d347e24453ee574539f27391a430e305f8f75f2030a25c584a9b3378c1e400e8" exitCode=1 Feb 24 05:25:31.758900 master-0 kubenswrapper[7614]: I0224 05:25:31.757755 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"7d063f48-5f89-47d0-bafc-84a52839c806","Type":"ContainerDied","Data":"d347e24453ee574539f27391a430e305f8f75f2030a25c584a9b3378c1e400e8"} Feb 24 05:25:32.664034 master-0 kubenswrapper[7614]: I0224 05:25:32.663850 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:32.664034 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:32.664034 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:32.664034 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:32.664034 master-0 kubenswrapper[7614]: I0224 05:25:32.663997 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:33.217268 master-0 kubenswrapper[7614]: I0224 05:25:33.217156 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7d063f48-5f89-47d0-bafc-84a52839c806/installer/0.log" Feb 24 05:25:33.217268 master-0 kubenswrapper[7614]: 
I0224 05:25:33.217269 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:25:33.298816 master-0 kubenswrapper[7614]: I0224 05:25:33.298680 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-kubelet-dir\") pod \"7d063f48-5f89-47d0-bafc-84a52839c806\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " Feb 24 05:25:33.299211 master-0 kubenswrapper[7614]: I0224 05:25:33.298846 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7d063f48-5f89-47d0-bafc-84a52839c806" (UID: "7d063f48-5f89-47d0-bafc-84a52839c806"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:25:33.299211 master-0 kubenswrapper[7614]: I0224 05:25:33.299018 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-var-lock\") pod \"7d063f48-5f89-47d0-bafc-84a52839c806\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " Feb 24 05:25:33.299211 master-0 kubenswrapper[7614]: I0224 05:25:33.299149 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-var-lock" (OuterVolumeSpecName: "var-lock") pod "7d063f48-5f89-47d0-bafc-84a52839c806" (UID: "7d063f48-5f89-47d0-bafc-84a52839c806"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:25:33.299211 master-0 kubenswrapper[7614]: I0224 05:25:33.299170 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d063f48-5f89-47d0-bafc-84a52839c806-kube-api-access\") pod \"7d063f48-5f89-47d0-bafc-84a52839c806\" (UID: \"7d063f48-5f89-47d0-bafc-84a52839c806\") " Feb 24 05:25:33.299892 master-0 kubenswrapper[7614]: I0224 05:25:33.299833 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:25:33.299892 master-0 kubenswrapper[7614]: I0224 05:25:33.299876 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7d063f48-5f89-47d0-bafc-84a52839c806-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:25:33.304443 master-0 kubenswrapper[7614]: I0224 05:25:33.304369 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d063f48-5f89-47d0-bafc-84a52839c806-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7d063f48-5f89-47d0-bafc-84a52839c806" (UID: "7d063f48-5f89-47d0-bafc-84a52839c806"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:25:33.402226 master-0 kubenswrapper[7614]: I0224 05:25:33.402097 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7d063f48-5f89-47d0-bafc-84a52839c806-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 05:25:33.664126 master-0 kubenswrapper[7614]: I0224 05:25:33.664020 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:33.664126 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:33.664126 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:33.664126 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:33.664695 master-0 kubenswrapper[7614]: I0224 05:25:33.664127 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:33.778108 master-0 kubenswrapper[7614]: I0224 05:25:33.778044 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7d063f48-5f89-47d0-bafc-84a52839c806/installer/0.log" Feb 24 05:25:33.778108 master-0 kubenswrapper[7614]: I0224 05:25:33.778133 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"7d063f48-5f89-47d0-bafc-84a52839c806","Type":"ContainerDied","Data":"835ae03e3e8588604d9220c7c10316442703346b5052f347621a9b0860a0156c"} Feb 24 05:25:33.778589 master-0 kubenswrapper[7614]: I0224 05:25:33.778172 7614 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="835ae03e3e8588604d9220c7c10316442703346b5052f347621a9b0860a0156c" Feb 24 05:25:33.778589 master-0 kubenswrapper[7614]: I0224 05:25:33.778283 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:25:34.108484 master-0 kubenswrapper[7614]: E0224 05:25:34.108380 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 24 05:25:34.209177 master-0 kubenswrapper[7614]: E0224 05:25:34.209077 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 24 05:25:34.210129 master-0 kubenswrapper[7614]: I0224 05:25:34.210083 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 24 05:25:34.245757 master-0 kubenswrapper[7614]: W0224 05:25:34.245669 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb419b8533666d3ae7054c771ce97a95f.slice/crio-2e6f428788cdb3f513e95cc63ecf43bbf7b7de35faa154cc080dbc5634ce8151 WatchSource:0}: Error finding container 2e6f428788cdb3f513e95cc63ecf43bbf7b7de35faa154cc080dbc5634ce8151: Status 404 returned error can't find the container with id 2e6f428788cdb3f513e95cc63ecf43bbf7b7de35faa154cc080dbc5634ce8151 Feb 24 05:25:34.663677 master-0 kubenswrapper[7614]: I0224 05:25:34.663585 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:34.663677 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:34.663677 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:34.663677 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:34.664098 master-0 kubenswrapper[7614]: I0224 05:25:34.663796 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:34.701947 master-0 kubenswrapper[7614]: I0224 05:25:34.701846 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:25:34.702127 master-0 kubenswrapper[7614]: I0224 05:25:34.701959 7614 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:25:34.789346 master-0 kubenswrapper[7614]: I0224 05:25:34.789217 7614 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="d72f9375dea0ad0635b80a9933bdb84b391c0ae97efa1ec6ec782f2d615cceb4" exitCode=0 Feb 24 05:25:34.789346 master-0 kubenswrapper[7614]: I0224 05:25:34.789301 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"d72f9375dea0ad0635b80a9933bdb84b391c0ae97efa1ec6ec782f2d615cceb4"} Feb 24 05:25:34.789747 master-0 kubenswrapper[7614]: I0224 05:25:34.789404 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"2e6f428788cdb3f513e95cc63ecf43bbf7b7de35faa154cc080dbc5634ce8151"} Feb 24 05:25:34.789999 master-0 kubenswrapper[7614]: I0224 05:25:34.789944 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:25:34.789999 master-0 kubenswrapper[7614]: I0224 05:25:34.789986 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:25:35.664404 master-0 kubenswrapper[7614]: I0224 05:25:35.664214 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:35.664404 master-0 kubenswrapper[7614]: 
[-]has-synced failed: reason withheld Feb 24 05:25:35.664404 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:35.664404 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:35.664404 master-0 kubenswrapper[7614]: I0224 05:25:35.664383 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:36.441848 master-0 kubenswrapper[7614]: E0224 05:25:36.441549 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:25:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:25:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:25:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:25:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certi
fied-operator-index@sha256:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0f
efe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],
\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:25:36.712365 master-0 kubenswrapper[7614]: I0224 05:25:36.712107 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:36.712365 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:36.712365 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:36.712365 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:36.712365 master-0 kubenswrapper[7614]: I0224 05:25:36.712227 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:37.664343 master-0 kubenswrapper[7614]: I0224 05:25:37.664228 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:37.664343 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:37.664343 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:37.664343 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:37.664776 master-0 kubenswrapper[7614]: I0224 05:25:37.664381 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:38.663753 master-0 kubenswrapper[7614]: I0224 05:25:38.663613 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:38.663753 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:38.663753 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:38.663753 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:38.665661 master-0 kubenswrapper[7614]: I0224 05:25:38.663776 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:39.663391 
master-0 kubenswrapper[7614]: I0224 05:25:39.663286 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:39.663391 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:39.663391 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:39.663391 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:39.663608 master-0 kubenswrapper[7614]: I0224 05:25:39.663403 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:40.663192 master-0 kubenswrapper[7614]: I0224 05:25:40.663090 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:40.663192 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:40.663192 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:40.663192 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:40.664198 master-0 kubenswrapper[7614]: I0224 05:25:40.663200 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:41.664000 master-0 kubenswrapper[7614]: I0224 05:25:41.663895 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:41.664000 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:41.664000 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:41.664000 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:41.665020 master-0 kubenswrapper[7614]: I0224 05:25:41.664020 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:42.664145 master-0 kubenswrapper[7614]: I0224 05:25:42.663996 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:42.664145 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:42.664145 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:42.664145 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:42.664145 master-0 kubenswrapper[7614]: I0224 05:25:42.664135 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:43.663654 master-0 kubenswrapper[7614]: I0224 05:25:43.663550 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:43.663654 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:43.663654 master-0 
kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:43.663654 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:43.664418 master-0 kubenswrapper[7614]: I0224 05:25:43.664297 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:44.509807 master-0 kubenswrapper[7614]: E0224 05:25:44.509608 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 24 05:25:44.663856 master-0 kubenswrapper[7614]: I0224 05:25:44.663739 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:44.663856 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:44.663856 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:44.663856 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:44.664426 master-0 kubenswrapper[7614]: I0224 05:25:44.663861 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:44.701785 master-0 kubenswrapper[7614]: I0224 05:25:44.701704 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get 
\"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:25:44.701785 master-0 kubenswrapper[7614]: I0224 05:25:44.701768 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:25:44.702124 master-0 kubenswrapper[7614]: I0224 05:25:44.701846 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:25:44.703106 master-0 kubenswrapper[7614]: I0224 05:25:44.703032 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"d5c20b92312f36a79271d5fd1a9a93a147a0f9575364641bb14c812c34fb24f8"} pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 24 05:25:44.703214 master-0 kubenswrapper[7614]: I0224 05:25:44.703105 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" containerID="cri-o://d5c20b92312f36a79271d5fd1a9a93a147a0f9575364641bb14c812c34fb24f8" gracePeriod=30 Feb 24 05:25:45.663917 master-0 kubenswrapper[7614]: I0224 05:25:45.663700 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:45.663917 
master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:45.663917 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:45.663917 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:45.663917 master-0 kubenswrapper[7614]: I0224 05:25:45.663795 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:45.894903 master-0 kubenswrapper[7614]: I0224 05:25:45.894795 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/1.log" Feb 24 05:25:45.896154 master-0 kubenswrapper[7614]: I0224 05:25:45.896101 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/0.log" Feb 24 05:25:45.896303 master-0 kubenswrapper[7614]: I0224 05:25:45.896184 7614 generic.go:334] "Generic (PLEG): container finished" podID="59333a14-5bdc-4590-a3da-af7300f086da" containerID="d5c20b92312f36a79271d5fd1a9a93a147a0f9575364641bb14c812c34fb24f8" exitCode=255 Feb 24 05:25:45.896303 master-0 kubenswrapper[7614]: I0224 05:25:45.896247 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerDied","Data":"d5c20b92312f36a79271d5fd1a9a93a147a0f9575364641bb14c812c34fb24f8"} Feb 24 05:25:45.896567 master-0 kubenswrapper[7614]: I0224 05:25:45.896394 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" 
event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerStarted","Data":"443e2cc8a24d2e54b563564a171d6e7bc732fa198a57aa6dc2d46c10dc569ce8"} Feb 24 05:25:45.896567 master-0 kubenswrapper[7614]: I0224 05:25:45.896446 7614 scope.go:117] "RemoveContainer" containerID="d5ce8ccd581f3f0a727f122a907bfeeff964d35571ffdd52c3f7804a92dfb1d9" Feb 24 05:25:46.443069 master-0 kubenswrapper[7614]: E0224 05:25:46.442980 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:25:46.663818 master-0 kubenswrapper[7614]: I0224 05:25:46.663684 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:46.663818 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:46.663818 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:46.663818 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:46.664407 master-0 kubenswrapper[7614]: I0224 05:25:46.663827 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:46.910528 master-0 kubenswrapper[7614]: I0224 05:25:46.910438 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/1.log" Feb 24 05:25:47.663572 master-0 kubenswrapper[7614]: I0224 05:25:47.663374 7614 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:47.663572 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:47.663572 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:47.663572 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:47.663572 master-0 kubenswrapper[7614]: I0224 05:25:47.663534 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:48.663740 master-0 kubenswrapper[7614]: I0224 05:25:48.663648 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:48.663740 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:48.663740 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:48.663740 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:48.664821 master-0 kubenswrapper[7614]: I0224 05:25:48.663775 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:49.664197 master-0 kubenswrapper[7614]: I0224 05:25:49.664105 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
05:25:49.664197 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:49.664197 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:49.664197 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:49.665102 master-0 kubenswrapper[7614]: I0224 05:25:49.664212 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:50.664350 master-0 kubenswrapper[7614]: I0224 05:25:50.664211 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:50.664350 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:50.664350 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:50.664350 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:50.665659 master-0 kubenswrapper[7614]: I0224 05:25:50.664442 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:50.947600 master-0 kubenswrapper[7614]: I0224 05:25:50.947359 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/3.log" Feb 24 05:25:50.948728 master-0 kubenswrapper[7614]: I0224 05:25:50.948644 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/2.log" Feb 24 05:25:50.949428 
master-0 kubenswrapper[7614]: I0224 05:25:50.949367 7614 generic.go:334] "Generic (PLEG): container finished" podID="3d6b1ce7-1213-494c-829d-186d39eac7eb" containerID="cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba" exitCode=1 Feb 24 05:25:50.949625 master-0 kubenswrapper[7614]: I0224 05:25:50.949426 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerDied","Data":"cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba"} Feb 24 05:25:50.949625 master-0 kubenswrapper[7614]: I0224 05:25:50.949505 7614 scope.go:117] "RemoveContainer" containerID="50c8d66910cbcf1dcdc03811dff2f9abc3d95e2e93235a68b4cc89109830e7b9" Feb 24 05:25:50.950382 master-0 kubenswrapper[7614]: I0224 05:25:50.950267 7614 scope.go:117] "RemoveContainer" containerID="cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba" Feb 24 05:25:50.950779 master-0 kubenswrapper[7614]: E0224 05:25:50.950550 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:25:51.663286 master-0 kubenswrapper[7614]: I0224 05:25:51.663162 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:51.663286 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:51.663286 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:51.663286 master-0 
kubenswrapper[7614]: healthz check failed Feb 24 05:25:51.663286 master-0 kubenswrapper[7614]: I0224 05:25:51.663299 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:51.960625 master-0 kubenswrapper[7614]: I0224 05:25:51.960455 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/3.log" Feb 24 05:25:52.664834 master-0 kubenswrapper[7614]: I0224 05:25:52.664744 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:52.664834 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:52.664834 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:52.664834 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:52.665493 master-0 kubenswrapper[7614]: I0224 05:25:52.664837 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:53.664360 master-0 kubenswrapper[7614]: I0224 05:25:53.664237 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:53.664360 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:53.664360 master-0 kubenswrapper[7614]: 
[+]process-running ok Feb 24 05:25:53.664360 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:53.665452 master-0 kubenswrapper[7614]: I0224 05:25:53.664449 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:54.663527 master-0 kubenswrapper[7614]: I0224 05:25:54.663465 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:54.663527 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:54.663527 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:54.663527 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:54.664182 master-0 kubenswrapper[7614]: I0224 05:25:54.664137 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:55.311005 master-0 kubenswrapper[7614]: E0224 05:25:55.310872 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 24 05:25:55.664601 master-0 kubenswrapper[7614]: I0224 05:25:55.664363 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 24 05:25:55.664601 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:55.664601 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:55.664601 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:55.664601 master-0 kubenswrapper[7614]: I0224 05:25:55.664498 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:56.443916 master-0 kubenswrapper[7614]: E0224 05:25:56.443655 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:25:56.663708 master-0 kubenswrapper[7614]: I0224 05:25:56.663621 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:56.663708 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:56.663708 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:56.663708 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:56.664527 master-0 kubenswrapper[7614]: I0224 05:25:56.664464 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:57.664532 master-0 kubenswrapper[7614]: I0224 05:25:57.664476 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:57.664532 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:57.664532 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:57.664532 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:57.665629 master-0 kubenswrapper[7614]: I0224 05:25:57.665573 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:58.022603 master-0 kubenswrapper[7614]: I0224 05:25:58.022490 7614 generic.go:334] "Generic (PLEG): container finished" podID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerID="c2d1c04894486e075c5bb15ad6bb88a45eb446ca42f9495fa6638b84c3d79262" exitCode=0 Feb 24 05:25:58.022603 master-0 kubenswrapper[7614]: I0224 05:25:58.022581 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" event={"ID":"dd29bef3-d27e-48b3-9aa0-d915e949b3d5","Type":"ContainerDied","Data":"c2d1c04894486e075c5bb15ad6bb88a45eb446ca42f9495fa6638b84c3d79262"} Feb 24 05:25:58.023088 master-0 kubenswrapper[7614]: I0224 05:25:58.022645 7614 scope.go:117] "RemoveContainer" containerID="270089d93d1aad8adc2c6f3a218f7c7455fbc8f4604c672dd2ed10a74721af6c" Feb 24 05:25:58.023613 master-0 kubenswrapper[7614]: I0224 05:25:58.023564 7614 scope.go:117] "RemoveContainer" containerID="c2d1c04894486e075c5bb15ad6bb88a45eb446ca42f9495fa6638b84c3d79262" Feb 24 05:25:58.023988 master-0 kubenswrapper[7614]: E0224 05:25:58.023921 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=marketplace-operator pod=marketplace-operator-6f5488b997-dbsnm_openshift-marketplace(dd29bef3-d27e-48b3-9aa0-d915e949b3d5)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" Feb 24 05:25:58.664348 master-0 kubenswrapper[7614]: I0224 05:25:58.664250 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:58.664348 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:58.664348 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:58.664348 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:58.665201 master-0 kubenswrapper[7614]: I0224 05:25:58.664461 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:25:59.663337 master-0 kubenswrapper[7614]: I0224 05:25:59.663216 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:25:59.663337 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:25:59.663337 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:25:59.663337 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:25:59.663789 master-0 kubenswrapper[7614]: I0224 05:25:59.663365 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 05:26:00.663475 master-0 kubenswrapper[7614]: I0224 05:26:00.663370 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:00.663475 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:00.663475 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:00.663475 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:00.663475 master-0 kubenswrapper[7614]: I0224 05:26:00.663474 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:01.663724 master-0 kubenswrapper[7614]: I0224 05:26:01.663641 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:01.663724 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:01.663724 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:01.663724 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:01.664932 master-0 kubenswrapper[7614]: I0224 05:26:01.663736 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:02.229668 master-0 kubenswrapper[7614]: I0224 05:26:02.229570 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:26:02.229668 master-0 kubenswrapper[7614]: I0224 05:26:02.229659 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:26:02.230523 master-0 kubenswrapper[7614]: I0224 05:26:02.230365 7614 scope.go:117] "RemoveContainer" containerID="c2d1c04894486e075c5bb15ad6bb88a45eb446ca42f9495fa6638b84c3d79262" Feb 24 05:26:02.233141 master-0 kubenswrapper[7614]: E0224 05:26:02.232984 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-6f5488b997-dbsnm_openshift-marketplace(dd29bef3-d27e-48b3-9aa0-d915e949b3d5)\"" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" podUID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" Feb 24 05:26:02.663829 master-0 kubenswrapper[7614]: I0224 05:26:02.663734 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:02.663829 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:02.663829 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:02.663829 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:02.664526 master-0 kubenswrapper[7614]: I0224 05:26:02.663869 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:02.927727 master-0 kubenswrapper[7614]: E0224 05:26:02.927433 7614 event.go:359] "Server rejected event (will not retry!)" 
err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-j28p2.18971758d16046e2 openshift-multus 11388 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-j28p2,UID:2303d3b8-fe6a-469a-a306-4e1685181dbe,APIVersion:v1,ResourceVersion:11230,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:24:08 +0000 UTC,LastTimestamp:2026-02-24 05:24:28.629027721 +0000 UTC m=+599.663770907,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:26:03.664182 master-0 kubenswrapper[7614]: I0224 05:26:03.664085 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:03.664182 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:03.664182 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:03.664182 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:03.665403 master-0 kubenswrapper[7614]: I0224 05:26:03.664204 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:04.174712 master-0 kubenswrapper[7614]: I0224 05:26:04.174640 7614 scope.go:117] "RemoveContainer" 
containerID="cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba" Feb 24 05:26:04.175779 master-0 kubenswrapper[7614]: E0224 05:26:04.175700 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:26:04.664519 master-0 kubenswrapper[7614]: I0224 05:26:04.664405 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:04.664519 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:04.664519 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:04.664519 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:04.665788 master-0 kubenswrapper[7614]: I0224 05:26:04.664541 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:05.664217 master-0 kubenswrapper[7614]: I0224 05:26:05.664106 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:05.664217 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:05.664217 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:05.664217 master-0 
kubenswrapper[7614]: healthz check failed Feb 24 05:26:05.664217 master-0 kubenswrapper[7614]: I0224 05:26:05.664209 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:06.444933 master-0 kubenswrapper[7614]: E0224 05:26:06.444845 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:26:06.664187 master-0 kubenswrapper[7614]: I0224 05:26:06.664071 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:06.664187 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:06.664187 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:06.664187 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:06.665371 master-0 kubenswrapper[7614]: I0224 05:26:06.664189 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:06.912075 master-0 kubenswrapper[7614]: E0224 05:26:06.911931 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Feb 24 05:26:07.663085 
master-0 kubenswrapper[7614]: I0224 05:26:07.662945 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:07.663085 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:07.663085 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:07.663085 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:07.663085 master-0 kubenswrapper[7614]: I0224 05:26:07.663075 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:08.664007 master-0 kubenswrapper[7614]: I0224 05:26:08.663899 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:08.664007 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:08.664007 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:08.664007 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:08.665084 master-0 kubenswrapper[7614]: I0224 05:26:08.664005 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:08.794431 master-0 kubenswrapper[7614]: E0224 05:26:08.793982 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" 
pod="openshift-etcd/etcd-master-0" Feb 24 05:26:09.665216 master-0 kubenswrapper[7614]: I0224 05:26:09.665109 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:09.665216 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:09.665216 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:09.665216 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:09.666564 master-0 kubenswrapper[7614]: I0224 05:26:09.665223 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:10.141062 master-0 kubenswrapper[7614]: I0224 05:26:10.140991 7614 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="bafd9772766031fe924e6722dc991fce3b4b72af5430d21cc3d769595f49edeb" exitCode=0 Feb 24 05:26:10.141466 master-0 kubenswrapper[7614]: I0224 05:26:10.141104 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"bafd9772766031fe924e6722dc991fce3b4b72af5430d21cc3d769595f49edeb"} Feb 24 05:26:10.142008 master-0 kubenswrapper[7614]: I0224 05:26:10.141986 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:26:10.142115 master-0 kubenswrapper[7614]: I0224 05:26:10.142101 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:26:10.664191 master-0 kubenswrapper[7614]: I0224 05:26:10.664075 7614 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:10.664191 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:10.664191 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:10.664191 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:10.664760 master-0 kubenswrapper[7614]: I0224 05:26:10.664209 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:11.151930 master-0 kubenswrapper[7614]: I0224 05:26:11.151824 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/1.log" Feb 24 05:26:11.152940 master-0 kubenswrapper[7614]: I0224 05:26:11.152751 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/0.log" Feb 24 05:26:11.152940 master-0 kubenswrapper[7614]: I0224 05:26:11.152843 7614 generic.go:334] "Generic (PLEG): container finished" podID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" containerID="9223fb2da930fb3c50e82163a41bfe2c42eac1ee2e2d4f682d787074cbff45d5" exitCode=1 Feb 24 05:26:11.153086 master-0 kubenswrapper[7614]: I0224 05:26:11.152956 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerDied","Data":"9223fb2da930fb3c50e82163a41bfe2c42eac1ee2e2d4f682d787074cbff45d5"} 
Feb 24 05:26:11.153086 master-0 kubenswrapper[7614]: I0224 05:26:11.153018 7614 scope.go:117] "RemoveContainer" containerID="92100dde9dbd51740744fac31aa4b79ba4dfcf0cd902c28d6ae66b9259196300" Feb 24 05:26:11.153981 master-0 kubenswrapper[7614]: I0224 05:26:11.153928 7614 scope.go:117] "RemoveContainer" containerID="9223fb2da930fb3c50e82163a41bfe2c42eac1ee2e2d4f682d787074cbff45d5" Feb 24 05:26:11.154274 master-0 kubenswrapper[7614]: E0224 05:26:11.154214 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:26:11.156119 master-0 kubenswrapper[7614]: I0224 05:26:11.155906 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t_f3cd3830-62b5-49d1-917e-bd993d685c65/config-sync-controllers/0.log" Feb 24 05:26:11.156488 master-0 kubenswrapper[7614]: I0224 05:26:11.156437 7614 generic.go:334] "Generic (PLEG): container finished" podID="f3cd3830-62b5-49d1-917e-bd993d685c65" containerID="1bb8d464111f0e717ad599e137d9e8e3853e8cfeea75bffbb868b896a7e93fff" exitCode=1 Feb 24 05:26:11.156488 master-0 kubenswrapper[7614]: I0224 05:26:11.156483 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" event={"ID":"f3cd3830-62b5-49d1-917e-bd993d685c65","Type":"ContainerDied","Data":"1bb8d464111f0e717ad599e137d9e8e3853e8cfeea75bffbb868b896a7e93fff"} Feb 24 05:26:11.157503 master-0 kubenswrapper[7614]: I0224 05:26:11.157451 7614 scope.go:117] "RemoveContainer" 
containerID="1bb8d464111f0e717ad599e137d9e8e3853e8cfeea75bffbb868b896a7e93fff" Feb 24 05:26:11.663059 master-0 kubenswrapper[7614]: I0224 05:26:11.662838 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:11.663059 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:11.663059 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:11.663059 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:11.663059 master-0 kubenswrapper[7614]: I0224 05:26:11.662993 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:12.167955 master-0 kubenswrapper[7614]: I0224 05:26:12.167843 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/1.log" Feb 24 05:26:12.171703 master-0 kubenswrapper[7614]: I0224 05:26:12.171648 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t_f3cd3830-62b5-49d1-917e-bd993d685c65/config-sync-controllers/0.log" Feb 24 05:26:12.172257 master-0 kubenswrapper[7614]: I0224 05:26:12.172174 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" event={"ID":"f3cd3830-62b5-49d1-917e-bd993d685c65","Type":"ContainerStarted","Data":"a799ef5937e1bb71d73effc95f1a22e9d1cc419cae3721dc36d10d1476b10b79"} Feb 24 05:26:12.663937 master-0 
kubenswrapper[7614]: I0224 05:26:12.663813 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:12.663937 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:12.663937 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:12.663937 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:12.663937 master-0 kubenswrapper[7614]: I0224 05:26:12.663918 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:13.184662 master-0 kubenswrapper[7614]: I0224 05:26:13.184552 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t_f3cd3830-62b5-49d1-917e-bd993d685c65/config-sync-controllers/0.log" Feb 24 05:26:13.185644 master-0 kubenswrapper[7614]: I0224 05:26:13.185384 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t_f3cd3830-62b5-49d1-917e-bd993d685c65/cluster-cloud-controller-manager/0.log" Feb 24 05:26:13.185644 master-0 kubenswrapper[7614]: I0224 05:26:13.185470 7614 generic.go:334] "Generic (PLEG): container finished" podID="f3cd3830-62b5-49d1-917e-bd993d685c65" containerID="1f44dc53b225ecb6e6f89dd2368c871c5572185f200fea78cfb5b504bac772aa" exitCode=1 Feb 24 05:26:13.185644 master-0 kubenswrapper[7614]: I0224 05:26:13.185524 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" 
event={"ID":"f3cd3830-62b5-49d1-917e-bd993d685c65","Type":"ContainerDied","Data":"1f44dc53b225ecb6e6f89dd2368c871c5572185f200fea78cfb5b504bac772aa"} Feb 24 05:26:13.186380 master-0 kubenswrapper[7614]: I0224 05:26:13.186341 7614 scope.go:117] "RemoveContainer" containerID="1f44dc53b225ecb6e6f89dd2368c871c5572185f200fea78cfb5b504bac772aa" Feb 24 05:26:13.664981 master-0 kubenswrapper[7614]: I0224 05:26:13.664544 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:13.664981 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:13.664981 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:13.664981 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:13.664981 master-0 kubenswrapper[7614]: I0224 05:26:13.664734 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:14.175021 master-0 kubenswrapper[7614]: I0224 05:26:14.174908 7614 scope.go:117] "RemoveContainer" containerID="c2d1c04894486e075c5bb15ad6bb88a45eb446ca42f9495fa6638b84c3d79262" Feb 24 05:26:14.200547 master-0 kubenswrapper[7614]: I0224 05:26:14.200462 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t_f3cd3830-62b5-49d1-917e-bd993d685c65/config-sync-controllers/0.log" Feb 24 05:26:14.202153 master-0 kubenswrapper[7614]: I0224 05:26:14.202097 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t_f3cd3830-62b5-49d1-917e-bd993d685c65/cluster-cloud-controller-manager/0.log" Feb 24 05:26:14.203172 master-0 kubenswrapper[7614]: I0224 05:26:14.203112 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" event={"ID":"f3cd3830-62b5-49d1-917e-bd993d685c65","Type":"ContainerStarted","Data":"6820a0d549f5ec3fde025e98b7d478fa85e415bc8dd21a03332d733918c21ea6"} Feb 24 05:26:14.663293 master-0 kubenswrapper[7614]: I0224 05:26:14.663195 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:14.663293 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:14.663293 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:14.663293 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:14.663764 master-0 kubenswrapper[7614]: I0224 05:26:14.663299 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:15.175496 master-0 kubenswrapper[7614]: I0224 05:26:15.175418 7614 scope.go:117] "RemoveContainer" containerID="cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba" Feb 24 05:26:15.175965 master-0 kubenswrapper[7614]: E0224 05:26:15.175909 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator 
pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:26:15.218294 master-0 kubenswrapper[7614]: I0224 05:26:15.218194 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" event={"ID":"dd29bef3-d27e-48b3-9aa0-d915e949b3d5","Type":"ContainerStarted","Data":"dfa2027afdbc66c1b745336b98230daefed3256b7845e47e97a59af2c509d7eb"} Feb 24 05:26:15.219305 master-0 kubenswrapper[7614]: I0224 05:26:15.218762 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:26:15.228459 master-0 kubenswrapper[7614]: I0224 05:26:15.228359 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:26:15.663619 master-0 kubenswrapper[7614]: I0224 05:26:15.663493 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:15.663619 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:15.663619 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:15.663619 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:15.663619 master-0 kubenswrapper[7614]: I0224 05:26:15.663601 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:16.446023 master-0 kubenswrapper[7614]: E0224 05:26:16.445927 7614 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:26:16.446023 master-0 kubenswrapper[7614]: E0224 05:26:16.445993 7614 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 05:26:16.664107 master-0 kubenswrapper[7614]: I0224 05:26:16.663968 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:16.664107 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:16.664107 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:16.664107 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:16.664107 master-0 kubenswrapper[7614]: I0224 05:26:16.664094 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:17.663280 master-0 kubenswrapper[7614]: I0224 05:26:17.663141 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:17.663280 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:17.663280 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:17.663280 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:17.663280 master-0 kubenswrapper[7614]: I0224 05:26:17.663279 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:18.664191 master-0 kubenswrapper[7614]: I0224 05:26:18.664083 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:18.664191 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:18.664191 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:18.664191 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:18.664191 master-0 kubenswrapper[7614]: I0224 05:26:18.664186 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:19.664662 master-0 kubenswrapper[7614]: I0224 05:26:19.664536 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:19.664662 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:19.664662 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:19.664662 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:19.665825 master-0 kubenswrapper[7614]: I0224 05:26:19.664659 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:20.113815 
master-0 kubenswrapper[7614]: E0224 05:26:20.113697 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 24 05:26:20.663426 master-0 kubenswrapper[7614]: I0224 05:26:20.663236 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:20.663426 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:20.663426 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:20.663426 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:20.663972 master-0 kubenswrapper[7614]: I0224 05:26:20.663447 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:21.663753 master-0 kubenswrapper[7614]: I0224 05:26:21.663634 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:21.663753 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:21.663753 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:21.663753 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:21.663753 master-0 kubenswrapper[7614]: I0224 05:26:21.663735 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:21.685344 master-0 kubenswrapper[7614]: I0224 05:26:21.685216 7614 status_manager.go:851] "Failed to get status for pod" podUID="59333a14-5bdc-4590-a3da-af7300f086da" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods authentication-operator-5bd7c86784-kbb8z)" Feb 24 05:26:22.282851 master-0 kubenswrapper[7614]: I0224 05:26:22.282753 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-t75jj_347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/manager/1.log" Feb 24 05:26:22.284400 master-0 kubenswrapper[7614]: I0224 05:26:22.284339 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-t75jj_347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/manager/0.log" Feb 24 05:26:22.284400 master-0 kubenswrapper[7614]: I0224 05:26:22.284429 7614 generic.go:334] "Generic (PLEG): container finished" podID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerID="27d3c979d980c52be573082c4d98e2b43efa2f5962b15df7eb3f072aaaaf8885" exitCode=1 Feb 24 05:26:22.285019 master-0 kubenswrapper[7614]: I0224 05:26:22.284500 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" event={"ID":"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a","Type":"ContainerDied","Data":"27d3c979d980c52be573082c4d98e2b43efa2f5962b15df7eb3f072aaaaf8885"} Feb 24 05:26:22.285019 master-0 kubenswrapper[7614]: I0224 05:26:22.284566 7614 scope.go:117] "RemoveContainer" containerID="54f08b019978c50707a9af7625f4b1969ac2f9de3d91bdb89125a98cc8b35f5f" Feb 24 05:26:22.285566 master-0 kubenswrapper[7614]: I0224 
05:26:22.285519 7614 scope.go:117] "RemoveContainer" containerID="27d3c979d980c52be573082c4d98e2b43efa2f5962b15df7eb3f072aaaaf8885" Feb 24 05:26:22.286111 master-0 kubenswrapper[7614]: E0224 05:26:22.286025 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-9cc7d7bb-t75jj_openshift-operator-controller(347c43e5-86d5-436f-bdc5-1c7bbe19ab2a)\"" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" Feb 24 05:26:22.663742 master-0 kubenswrapper[7614]: I0224 05:26:22.663427 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:22.663742 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:22.663742 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:22.663742 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:22.663742 master-0 kubenswrapper[7614]: I0224 05:26:22.663595 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:23.299351 master-0 kubenswrapper[7614]: I0224 05:26:23.299275 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-t75jj_347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/manager/1.log" Feb 24 05:26:23.664051 master-0 kubenswrapper[7614]: I0224 05:26:23.663835 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:23.664051 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:23.664051 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:23.664051 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:23.664051 master-0 kubenswrapper[7614]: I0224 05:26:23.663940 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:24.663927 master-0 kubenswrapper[7614]: I0224 05:26:24.663822 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:24.663927 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:24.663927 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:24.663927 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:24.663927 master-0 kubenswrapper[7614]: I0224 05:26:24.663905 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:24.704847 master-0 kubenswrapper[7614]: I0224 05:26:24.704751 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" 
start-of-body= Feb 24 05:26:24.705010 master-0 kubenswrapper[7614]: I0224 05:26:24.704852 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:26:25.174557 master-0 kubenswrapper[7614]: I0224 05:26:25.174437 7614 scope.go:117] "RemoveContainer" containerID="9223fb2da930fb3c50e82163a41bfe2c42eac1ee2e2d4f682d787074cbff45d5" Feb 24 05:26:25.310190 master-0 kubenswrapper[7614]: I0224 05:26:25.310110 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:26:25.312678 master-0 kubenswrapper[7614]: I0224 05:26:25.312617 7614 scope.go:117] "RemoveContainer" containerID="27d3c979d980c52be573082c4d98e2b43efa2f5962b15df7eb3f072aaaaf8885" Feb 24 05:26:25.313179 master-0 kubenswrapper[7614]: E0224 05:26:25.313125 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-9cc7d7bb-t75jj_openshift-operator-controller(347c43e5-86d5-436f-bdc5-1c7bbe19ab2a)\"" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" podUID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" Feb 24 05:26:25.663937 master-0 kubenswrapper[7614]: I0224 05:26:25.663806 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:25.663937 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 
05:26:25.663937 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:25.663937 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:25.663937 master-0 kubenswrapper[7614]: I0224 05:26:25.663919 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:26.331295 master-0 kubenswrapper[7614]: I0224 05:26:26.331181 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/1.log" Feb 24 05:26:26.331295 master-0 kubenswrapper[7614]: I0224 05:26:26.331294 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerStarted","Data":"b136ebe01c73b0fd59c9db45f5467f27ec8e855aa02eaefd1377f780ef7c8176"} Feb 24 05:26:26.688885 master-0 kubenswrapper[7614]: I0224 05:26:26.688596 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:26.688885 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:26.688885 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:26.688885 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:26.688885 master-0 kubenswrapper[7614]: I0224 05:26:26.688714 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Feb 24 05:26:27.175564 master-0 kubenswrapper[7614]: I0224 05:26:27.175469 7614 scope.go:117] "RemoveContainer" containerID="cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba" Feb 24 05:26:27.176041 master-0 kubenswrapper[7614]: E0224 05:26:27.175980 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:26:27.663938 master-0 kubenswrapper[7614]: I0224 05:26:27.663821 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:27.663938 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:27.663938 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:27.663938 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:27.664527 master-0 kubenswrapper[7614]: I0224 05:26:27.663936 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:28.663328 master-0 kubenswrapper[7614]: I0224 05:26:28.663191 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:28.663328 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 
05:26:28.663328 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:28.663328 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:28.664213 master-0 kubenswrapper[7614]: I0224 05:26:28.663345 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:29.715488 master-0 kubenswrapper[7614]: I0224 05:26:29.664995 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:29.715488 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:29.715488 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:29.715488 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:29.715488 master-0 kubenswrapper[7614]: I0224 05:26:29.665130 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:30.663705 master-0 kubenswrapper[7614]: I0224 05:26:30.663633 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:30.663705 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:30.663705 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:30.663705 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:30.664369 master-0 kubenswrapper[7614]: I0224 05:26:30.663713 
7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:31.663473 master-0 kubenswrapper[7614]: I0224 05:26:31.663393 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:31.663473 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:31.663473 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:31.663473 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:31.664573 master-0 kubenswrapper[7614]: I0224 05:26:31.663491 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:32.664094 master-0 kubenswrapper[7614]: I0224 05:26:32.663973 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:32.664094 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:32.664094 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:32.664094 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:32.665180 master-0 kubenswrapper[7614]: I0224 05:26:32.664110 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 24 05:26:33.397694 master-0 kubenswrapper[7614]: I0224 05:26:33.397613 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-zvzxs_d9492fbf-d0f4-4ecf-84ba-b089d69535c1/manager/1.log" Feb 24 05:26:33.398525 master-0 kubenswrapper[7614]: I0224 05:26:33.398490 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-zvzxs_d9492fbf-d0f4-4ecf-84ba-b089d69535c1/manager/0.log" Feb 24 05:26:33.399603 master-0 kubenswrapper[7614]: I0224 05:26:33.399549 7614 generic.go:334] "Generic (PLEG): container finished" podID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerID="54cc6a7eea7de4886fcefce8b98bd35f27338eed7eb5d39d1aa4df2fed85d25a" exitCode=1 Feb 24 05:26:33.399686 master-0 kubenswrapper[7614]: I0224 05:26:33.399617 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" event={"ID":"d9492fbf-d0f4-4ecf-84ba-b089d69535c1","Type":"ContainerDied","Data":"54cc6a7eea7de4886fcefce8b98bd35f27338eed7eb5d39d1aa4df2fed85d25a"} Feb 24 05:26:33.399686 master-0 kubenswrapper[7614]: I0224 05:26:33.399677 7614 scope.go:117] "RemoveContainer" containerID="189c37430c077be09301cf49e843b65676efb76e5d67d2ea4dd214f2f7102ef5" Feb 24 05:26:33.401431 master-0 kubenswrapper[7614]: I0224 05:26:33.401350 7614 scope.go:117] "RemoveContainer" containerID="54cc6a7eea7de4886fcefce8b98bd35f27338eed7eb5d39d1aa4df2fed85d25a" Feb 24 05:26:33.402605 master-0 kubenswrapper[7614]: E0224 05:26:33.402172 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-84b8d9d697-zvzxs_openshift-catalogd(d9492fbf-d0f4-4ecf-84ba-b089d69535c1)\"" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" 
podUID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" Feb 24 05:26:33.665293 master-0 kubenswrapper[7614]: I0224 05:26:33.665077 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:33.665293 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:33.665293 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:33.665293 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:33.665293 master-0 kubenswrapper[7614]: I0224 05:26:33.665193 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:34.412854 master-0 kubenswrapper[7614]: I0224 05:26:34.412749 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-zvzxs_d9492fbf-d0f4-4ecf-84ba-b089d69535c1/manager/1.log" Feb 24 05:26:34.664260 master-0 kubenswrapper[7614]: I0224 05:26:34.664088 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:34.664260 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:34.664260 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:34.664260 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:34.664260 master-0 kubenswrapper[7614]: I0224 05:26:34.664213 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:34.702616 master-0 kubenswrapper[7614]: I0224 05:26:34.702512 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:26:34.703236 master-0 kubenswrapper[7614]: I0224 05:26:34.702631 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:26:34.856799 master-0 kubenswrapper[7614]: I0224 05:26:34.856632 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:26:34.856799 master-0 kubenswrapper[7614]: I0224 05:26:34.856770 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:26:34.858029 master-0 kubenswrapper[7614]: I0224 05:26:34.857981 7614 scope.go:117] "RemoveContainer" containerID="54cc6a7eea7de4886fcefce8b98bd35f27338eed7eb5d39d1aa4df2fed85d25a" Feb 24 05:26:34.858452 master-0 kubenswrapper[7614]: E0224 05:26:34.858405 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-84b8d9d697-zvzxs_openshift-catalogd(d9492fbf-d0f4-4ecf-84ba-b089d69535c1)\"" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" 
podUID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" Feb 24 05:26:35.310971 master-0 kubenswrapper[7614]: I0224 05:26:35.310778 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:26:35.312365 master-0 kubenswrapper[7614]: I0224 05:26:35.312283 7614 scope.go:117] "RemoveContainer" containerID="27d3c979d980c52be573082c4d98e2b43efa2f5962b15df7eb3f072aaaaf8885" Feb 24 05:26:35.663331 master-0 kubenswrapper[7614]: I0224 05:26:35.663242 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:35.663331 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:35.663331 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:35.663331 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:35.663753 master-0 kubenswrapper[7614]: I0224 05:26:35.663367 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:36.454337 master-0 kubenswrapper[7614]: I0224 05:26:36.454246 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-t75jj_347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/manager/1.log" Feb 24 05:26:36.456216 master-0 kubenswrapper[7614]: I0224 05:26:36.456142 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" 
event={"ID":"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a","Type":"ContainerStarted","Data":"cd0e23c93d9d19329e94fefd8f36f16d6a2e80ce6bb51771ac2394b3eb011dde"} Feb 24 05:26:36.456677 master-0 kubenswrapper[7614]: I0224 05:26:36.456615 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:26:36.514829 master-0 kubenswrapper[7614]: E0224 05:26:36.514737 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:26:36.604677 master-0 kubenswrapper[7614]: E0224 05:26:36.604382 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:26:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:26:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:26:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:26:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8
e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d
33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"s
izeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:26:36.663252 master-0 kubenswrapper[7614]: I0224 05:26:36.663173 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:36.663252 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:36.663252 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:36.663252 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:36.663788 master-0 kubenswrapper[7614]: I0224 05:26:36.663265 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:36.934591 master-0 kubenswrapper[7614]: E0224 05:26:36.934360 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{authentication-operator-5bd7c86784-kbb8z.189717617ee2475a openshift-authentication-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-5bd7c86784-kbb8z,UID:59333a14-5bdc-4590-a3da-af7300f086da,APIVersion:v1,ResourceVersion:3428,FieldPath:spec.containers{authentication-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:24:45.901604698 +0000 UTC m=+616.936347884,LastTimestamp:2026-02-24 05:24:45.901604698 +0000 UTC m=+616.936347884,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:26:37.663145 master-0 kubenswrapper[7614]: I0224 05:26:37.663059 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:37.663145 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:37.663145 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:37.663145 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:37.664236 master-0 kubenswrapper[7614]: I0224 05:26:37.663155 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:38.175003 master-0 kubenswrapper[7614]: I0224 05:26:38.174901 7614 scope.go:117] "RemoveContainer" containerID="cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba" Feb 24 05:26:38.477278 master-0 kubenswrapper[7614]: I0224 05:26:38.477090 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/3.log" Feb 24 05:26:38.478081 master-0 kubenswrapper[7614]: I0224 05:26:38.477985 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d"} Feb 24 05:26:38.664358 master-0 kubenswrapper[7614]: I0224 05:26:38.664235 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:38.664358 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:38.664358 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:38.664358 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:38.664358 master-0 kubenswrapper[7614]: I0224 05:26:38.664354 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:39.664189 master-0 kubenswrapper[7614]: I0224 05:26:39.664115 7614 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:39.664189 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:39.664189 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:39.664189 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:39.665041 master-0 kubenswrapper[7614]: I0224 05:26:39.664213 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:40.664862 master-0 kubenswrapper[7614]: I0224 05:26:40.664772 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:40.664862 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:40.664862 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:40.664862 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:40.666204 master-0 kubenswrapper[7614]: I0224 05:26:40.664875 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:41.662937 master-0 kubenswrapper[7614]: I0224 05:26:41.662860 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
05:26:41.662937 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:41.662937 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:41.662937 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:41.663877 master-0 kubenswrapper[7614]: I0224 05:26:41.663823 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:42.664431 master-0 kubenswrapper[7614]: I0224 05:26:42.664151 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:42.664431 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:42.664431 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:42.664431 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:42.664431 master-0 kubenswrapper[7614]: I0224 05:26:42.664335 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:43.664231 master-0 kubenswrapper[7614]: I0224 05:26:43.664111 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:43.664231 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:43.664231 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:43.664231 master-0 kubenswrapper[7614]: healthz 
check failed Feb 24 05:26:43.664231 master-0 kubenswrapper[7614]: I0224 05:26:43.664215 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:44.146533 master-0 kubenswrapper[7614]: E0224 05:26:44.146399 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 24 05:26:44.542243 master-0 kubenswrapper[7614]: I0224 05:26:44.542173 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"91e0b255f1211698af466c04efc39f34de18fa6be54be7cd67ac60b0d5f244e7"} Feb 24 05:26:44.542969 master-0 kubenswrapper[7614]: I0224 05:26:44.542901 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:26:44.542969 master-0 kubenswrapper[7614]: I0224 05:26:44.542934 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:26:44.664067 master-0 kubenswrapper[7614]: I0224 05:26:44.663964 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:44.664067 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:44.664067 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:44.664067 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:44.664570 master-0 kubenswrapper[7614]: I0224 05:26:44.664086 7614 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:44.703956 master-0 kubenswrapper[7614]: I0224 05:26:44.703756 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:26:44.703956 master-0 kubenswrapper[7614]: I0224 05:26:44.703853 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:26:44.703956 master-0 kubenswrapper[7614]: I0224 05:26:44.703931 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:26:44.705438 master-0 kubenswrapper[7614]: I0224 05:26:44.705361 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"443e2cc8a24d2e54b563564a171d6e7bc732fa198a57aa6dc2d46c10dc569ce8"} pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 24 05:26:44.705582 master-0 kubenswrapper[7614]: I0224 05:26:44.705455 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" 
containerName="authentication-operator" containerID="cri-o://443e2cc8a24d2e54b563564a171d6e7bc732fa198a57aa6dc2d46c10dc569ce8" gracePeriod=30 Feb 24 05:26:45.313776 master-0 kubenswrapper[7614]: I0224 05:26:45.313699 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:26:45.568724 master-0 kubenswrapper[7614]: I0224 05:26:45.568639 7614 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="91e0b255f1211698af466c04efc39f34de18fa6be54be7cd67ac60b0d5f244e7" exitCode=0 Feb 24 05:26:45.569027 master-0 kubenswrapper[7614]: I0224 05:26:45.568711 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"91e0b255f1211698af466c04efc39f34de18fa6be54be7cd67ac60b0d5f244e7"} Feb 24 05:26:45.573125 master-0 kubenswrapper[7614]: I0224 05:26:45.573070 7614 generic.go:334] "Generic (PLEG): container finished" podID="88b915ff-fd94-4998-aa09-70f95c0f1b8a" containerID="96a4e787b3e1f9eeaea51f2ad42e9605d98e2f89f59460135daea10bdd951213" exitCode=0 Feb 24 05:26:45.573407 master-0 kubenswrapper[7614]: I0224 05:26:45.573353 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" event={"ID":"88b915ff-fd94-4998-aa09-70f95c0f1b8a","Type":"ContainerDied","Data":"96a4e787b3e1f9eeaea51f2ad42e9605d98e2f89f59460135daea10bdd951213"} Feb 24 05:26:45.573724 master-0 kubenswrapper[7614]: I0224 05:26:45.573676 7614 scope.go:117] "RemoveContainer" containerID="319aa71d8e4b9690e64904978260695fcae1163baf1014ab285b451aeabac3a9" Feb 24 05:26:45.574928 master-0 kubenswrapper[7614]: I0224 05:26:45.574427 7614 scope.go:117] "RemoveContainer" containerID="96a4e787b3e1f9eeaea51f2ad42e9605d98e2f89f59460135daea10bdd951213" Feb 24 05:26:45.574928 master-0 
kubenswrapper[7614]: E0224 05:26:45.574682 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-5d8dfcdc87-b8ght_openshift-ovn-kubernetes(88b915ff-fd94-4998-aa09-70f95c0f1b8a)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" podUID="88b915ff-fd94-4998-aa09-70f95c0f1b8a" Feb 24 05:26:45.576606 master-0 kubenswrapper[7614]: I0224 05:26:45.576564 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/2.log" Feb 24 05:26:45.577809 master-0 kubenswrapper[7614]: I0224 05:26:45.577752 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/1.log" Feb 24 05:26:45.577890 master-0 kubenswrapper[7614]: I0224 05:26:45.577839 7614 generic.go:334] "Generic (PLEG): container finished" podID="59333a14-5bdc-4590-a3da-af7300f086da" containerID="443e2cc8a24d2e54b563564a171d6e7bc732fa198a57aa6dc2d46c10dc569ce8" exitCode=255 Feb 24 05:26:45.577935 master-0 kubenswrapper[7614]: I0224 05:26:45.577890 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerDied","Data":"443e2cc8a24d2e54b563564a171d6e7bc732fa198a57aa6dc2d46c10dc569ce8"} Feb 24 05:26:45.577935 master-0 kubenswrapper[7614]: I0224 05:26:45.577923 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" 
event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerStarted","Data":"85a787ab234adbc4cec6c14f0d55a16949892b1a8442a2c568e5b38474ee2b06"} Feb 24 05:26:45.637289 master-0 kubenswrapper[7614]: I0224 05:26:45.637218 7614 scope.go:117] "RemoveContainer" containerID="d5c20b92312f36a79271d5fd1a9a93a147a0f9575364641bb14c812c34fb24f8" Feb 24 05:26:45.663693 master-0 kubenswrapper[7614]: I0224 05:26:45.663579 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:45.663693 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:45.663693 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:45.663693 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:45.664105 master-0 kubenswrapper[7614]: I0224 05:26:45.663712 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:46.591295 master-0 kubenswrapper[7614]: I0224 05:26:46.591245 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/2.log" Feb 24 05:26:46.605435 master-0 kubenswrapper[7614]: E0224 05:26:46.605383 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:26:46.662870 master-0 kubenswrapper[7614]: I0224 05:26:46.662784 7614 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:46.662870 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:46.662870 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:46.662870 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:46.662870 master-0 kubenswrapper[7614]: I0224 05:26:46.662874 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:47.663909 master-0 kubenswrapper[7614]: I0224 05:26:47.663750 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:47.663909 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:47.663909 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:47.663909 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:47.663909 master-0 kubenswrapper[7614]: I0224 05:26:47.663902 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:48.175384 master-0 kubenswrapper[7614]: I0224 05:26:48.175316 7614 scope.go:117] "RemoveContainer" containerID="54cc6a7eea7de4886fcefce8b98bd35f27338eed7eb5d39d1aa4df2fed85d25a" Feb 24 05:26:48.614481 master-0 kubenswrapper[7614]: I0224 05:26:48.614391 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-zvzxs_d9492fbf-d0f4-4ecf-84ba-b089d69535c1/manager/1.log" Feb 24 05:26:48.615816 master-0 kubenswrapper[7614]: I0224 05:26:48.615716 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" event={"ID":"d9492fbf-d0f4-4ecf-84ba-b089d69535c1","Type":"ContainerStarted","Data":"6b6981938ff8c63254650d8fe3b01ee59914d9fee08dabec2efbe53746dfe7b3"} Feb 24 05:26:48.665050 master-0 kubenswrapper[7614]: I0224 05:26:48.664977 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:48.665050 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:48.665050 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:48.665050 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:48.666268 master-0 kubenswrapper[7614]: I0224 05:26:48.665120 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:26:49.679428 master-0 kubenswrapper[7614]: I0224 05:26:49.679222 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:26:49.679428 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:26:49.679428 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:26:49.679428 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:26:49.679428 master-0 kubenswrapper[7614]: 
I0224 05:26:49.679381 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:50.664074 master-0 kubenswrapper[7614]: I0224 05:26:50.663970 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:50.664074 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:50.664074 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:50.664074 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:50.664583 master-0 kubenswrapper[7614]: I0224 05:26:50.664086 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:51.663685 master-0 kubenswrapper[7614]: I0224 05:26:51.663546 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:51.663685 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:51.663685 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:51.663685 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:51.664745 master-0 kubenswrapper[7614]: I0224 05:26:51.663705 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:52.663677 master-0 kubenswrapper[7614]: I0224 05:26:52.663627 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:52.663677 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:52.663677 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:52.663677 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:52.664468 master-0 kubenswrapper[7614]: I0224 05:26:52.664438 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:53.516043 master-0 kubenswrapper[7614]: E0224 05:26:53.515955 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 24 05:26:53.661764 master-0 kubenswrapper[7614]: I0224 05:26:53.661706 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/0.log"
Feb 24 05:26:53.662032 master-0 kubenswrapper[7614]: I0224 05:26:53.661792 7614 generic.go:334] "Generic (PLEG): container finished" podID="39623346-691b-42c8-af76-409d4f6629af" containerID="d4516cc83e87e18d7c8ea61312f0b1b6185fcfcd2b620f9f1b31d56f65e19d0a" exitCode=1
Feb 24 05:26:53.662032 master-0 kubenswrapper[7614]: I0224 05:26:53.661855 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerDied","Data":"d4516cc83e87e18d7c8ea61312f0b1b6185fcfcd2b620f9f1b31d56f65e19d0a"}
Feb 24 05:26:53.663260 master-0 kubenswrapper[7614]: I0224 05:26:53.663189 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:53.663260 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:53.663260 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:53.663260 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:53.663431 master-0 kubenswrapper[7614]: I0224 05:26:53.663371 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:53.663567 master-0 kubenswrapper[7614]: I0224 05:26:53.663525 7614 scope.go:117] "RemoveContainer" containerID="d4516cc83e87e18d7c8ea61312f0b1b6185fcfcd2b620f9f1b31d56f65e19d0a"
Feb 24 05:26:54.663302 master-0 kubenswrapper[7614]: I0224 05:26:54.663205 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:54.663302 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:54.663302 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:54.663302 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:54.664570 master-0 kubenswrapper[7614]: I0224 05:26:54.663349 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:54.676707 master-0 kubenswrapper[7614]: I0224 05:26:54.676633 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/0.log"
Feb 24 05:26:54.676875 master-0 kubenswrapper[7614]: I0224 05:26:54.676788 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerStarted","Data":"f390c9172bf667ce9e5a44fc191de51013e82e96eafb2547c98f9fa6aad29054"}
Feb 24 05:26:54.857765 master-0 kubenswrapper[7614]: I0224 05:26:54.857626 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs"
Feb 24 05:26:54.860646 master-0 kubenswrapper[7614]: I0224 05:26:54.860559 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs"
Feb 24 05:26:55.665004 master-0 kubenswrapper[7614]: I0224 05:26:55.664885 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:55.665004 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:55.665004 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:55.665004 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:55.666256 master-0 kubenswrapper[7614]: I0224 05:26:55.665037 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:55.690896 master-0 kubenswrapper[7614]: I0224 05:26:55.690731 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/2.log"
Feb 24 05:26:55.691852 master-0 kubenswrapper[7614]: I0224 05:26:55.691800 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/1.log"
Feb 24 05:26:55.691995 master-0 kubenswrapper[7614]: I0224 05:26:55.691875 7614 generic.go:334] "Generic (PLEG): container finished" podID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" containerID="b136ebe01c73b0fd59c9db45f5467f27ec8e855aa02eaefd1377f780ef7c8176" exitCode=1
Feb 24 05:26:55.692096 master-0 kubenswrapper[7614]: I0224 05:26:55.692021 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerDied","Data":"b136ebe01c73b0fd59c9db45f5467f27ec8e855aa02eaefd1377f780ef7c8176"}
Feb 24 05:26:55.692204 master-0 kubenswrapper[7614]: I0224 05:26:55.692165 7614 scope.go:117] "RemoveContainer" containerID="9223fb2da930fb3c50e82163a41bfe2c42eac1ee2e2d4f682d787074cbff45d5"
Feb 24 05:26:55.694135 master-0 kubenswrapper[7614]: I0224 05:26:55.694036 7614 scope.go:117] "RemoveContainer" containerID="b136ebe01c73b0fd59c9db45f5467f27ec8e855aa02eaefd1377f780ef7c8176"
Feb 24 05:26:55.694964 master-0 kubenswrapper[7614]: E0224 05:26:55.694578 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba"
Feb 24 05:26:56.606354 master-0 kubenswrapper[7614]: E0224 05:26:56.606200 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:26:56.664500 master-0 kubenswrapper[7614]: I0224 05:26:56.664409 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:56.664500 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:56.664500 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:56.664500 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:56.665009 master-0 kubenswrapper[7614]: I0224 05:26:56.664517 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:56.704014 master-0 kubenswrapper[7614]: I0224 05:26:56.703923 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/2.log"
Feb 24 05:26:57.664433 master-0 kubenswrapper[7614]: I0224 05:26:57.664286 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:57.664433 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:57.664433 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:57.664433 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:57.664433 master-0 kubenswrapper[7614]: I0224 05:26:57.664426 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:58.174491 master-0 kubenswrapper[7614]: I0224 05:26:58.174269 7614 scope.go:117] "RemoveContainer" containerID="96a4e787b3e1f9eeaea51f2ad42e9605d98e2f89f59460135daea10bdd951213"
Feb 24 05:26:58.664033 master-0 kubenswrapper[7614]: I0224 05:26:58.663928 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:58.664033 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:58.664033 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:58.664033 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:58.664470 master-0 kubenswrapper[7614]: I0224 05:26:58.664072 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:58.724517 master-0 kubenswrapper[7614]: I0224 05:26:58.724408 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" event={"ID":"88b915ff-fd94-4998-aa09-70f95c0f1b8a","Type":"ContainerStarted","Data":"1963f639dfcc8849d680fcf161b3d5b4fdca5718d4537c33eed743afbfeabc9d"}
Feb 24 05:26:59.664773 master-0 kubenswrapper[7614]: I0224 05:26:59.664702 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:26:59.664773 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:26:59.664773 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:26:59.664773 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:26:59.665647 master-0 kubenswrapper[7614]: I0224 05:26:59.664804 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:26:59.736438 master-0 kubenswrapper[7614]: I0224 05:26:59.736287 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler/0.log"
Feb 24 05:26:59.737200 master-0 kubenswrapper[7614]: I0224 05:26:59.737144 7614 generic.go:334] "Generic (PLEG): container finished" podID="ebb9c3b6f4ad10a97951cbde655daea9" containerID="4ada702e991319865f9dacb414ee4288bbdec2d1eeae1681a213589c60b83506" exitCode=1
Feb 24 05:26:59.737357 master-0 kubenswrapper[7614]: I0224 05:26:59.737283 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerDied","Data":"4ada702e991319865f9dacb414ee4288bbdec2d1eeae1681a213589c60b83506"}
Feb 24 05:26:59.738587 master-0 kubenswrapper[7614]: I0224 05:26:59.738538 7614 scope.go:117] "RemoveContainer" containerID="4ada702e991319865f9dacb414ee4288bbdec2d1eeae1681a213589c60b83506"
Feb 24 05:26:59.742411 master-0 kubenswrapper[7614]: I0224 05:26:59.741453 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="bddc98ab8f891bcfeab1f13ad02fb7915d32f69a34209664b3c92c1ac4cbbe83" exitCode=0
Feb 24 05:26:59.742411 master-0 kubenswrapper[7614]: I0224 05:26:59.741491 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerDied","Data":"bddc98ab8f891bcfeab1f13ad02fb7915d32f69a34209664b3c92c1ac4cbbe83"}
Feb 24 05:26:59.742411 master-0 kubenswrapper[7614]: I0224 05:26:59.741890 7614 scope.go:117] "RemoveContainer" containerID="bddc98ab8f891bcfeab1f13ad02fb7915d32f69a34209664b3c92c1ac4cbbe83"
Feb 24 05:27:00.664107 master-0 kubenswrapper[7614]: I0224 05:27:00.664006 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:00.664107 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:00.664107 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:00.664107 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:00.664634 master-0 kubenswrapper[7614]: I0224 05:27:00.664125 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:00.756977 master-0 kubenswrapper[7614]: I0224 05:27:00.756862 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler/0.log"
Feb 24 05:27:00.759910 master-0 kubenswrapper[7614]: I0224 05:27:00.758724 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerStarted","Data":"ce1534e77be4055b68d61d3ba9e804a6088794580111559891de6340e0482ba1"}
Feb 24 05:27:00.759910 master-0 kubenswrapper[7614]: I0224 05:27:00.759087 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 05:27:00.764999 master-0 kubenswrapper[7614]: I0224 05:27:00.764904 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"6a82ca0444126f0d9d13c9d14a9452e234110172ab33d1a5f9dfae0996ef9cff"}
Feb 24 05:27:00.769371 master-0 kubenswrapper[7614]: I0224 05:27:00.769072 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:27:00.769371 master-0 kubenswrapper[7614]: I0224 05:27:00.769282 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:27:01.664225 master-0 kubenswrapper[7614]: I0224 05:27:01.664118 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:01.664225 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:01.664225 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:01.664225 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:01.664778 master-0 kubenswrapper[7614]: I0224 05:27:01.664255 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:02.664386 master-0 kubenswrapper[7614]: I0224 05:27:02.664158 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:02.664386 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:02.664386 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:02.664386 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:02.664386 master-0 kubenswrapper[7614]: I0224 05:27:02.664268 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:03.663669 master-0 kubenswrapper[7614]: I0224 05:27:03.663575 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:03.663669 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:03.663669 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:03.663669 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:03.664106 master-0 kubenswrapper[7614]: I0224 05:27:03.663747 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:03.769406 master-0 kubenswrapper[7614]: I0224 05:27:03.769196 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 05:27:03.770557 master-0 kubenswrapper[7614]: I0224 05:27:03.770463 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:27:03.793761 master-0 kubenswrapper[7614]: I0224 05:27:03.793683 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-pb6sw_e6a0fc47-b446-4902-9f8a-04870cbafcab/machine-approver-controller/0.log"
Feb 24 05:27:03.794577 master-0 kubenswrapper[7614]: I0224 05:27:03.794512 7614 generic.go:334] "Generic (PLEG): container finished" podID="e6a0fc47-b446-4902-9f8a-04870cbafcab" containerID="ff86ebcc5c21c17d77b09c8668eacb2f60f3347c8c630b1700b81d719fb05f20" exitCode=255
Feb 24 05:27:03.794681 master-0 kubenswrapper[7614]: I0224 05:27:03.794589 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" event={"ID":"e6a0fc47-b446-4902-9f8a-04870cbafcab","Type":"ContainerDied","Data":"ff86ebcc5c21c17d77b09c8668eacb2f60f3347c8c630b1700b81d719fb05f20"}
Feb 24 05:27:03.795610 master-0 kubenswrapper[7614]: I0224 05:27:03.795538 7614 scope.go:117] "RemoveContainer" containerID="ff86ebcc5c21c17d77b09c8668eacb2f60f3347c8c630b1700b81d719fb05f20"
Feb 24 05:27:03.797274 master-0 kubenswrapper[7614]: I0224 05:27:03.797163 7614 generic.go:334] "Generic (PLEG): container finished" podID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerID="d54fd19b9eb4386cf27b0171bbd26afecfaf6c5721e1c1b2aba9af1126e48295" exitCode=0
Feb 24 05:27:03.797402 master-0 kubenswrapper[7614]: I0224 05:27:03.797283 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" event={"ID":"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4","Type":"ContainerDied","Data":"d54fd19b9eb4386cf27b0171bbd26afecfaf6c5721e1c1b2aba9af1126e48295"}
Feb 24 05:27:03.798052 master-0 kubenswrapper[7614]: I0224 05:27:03.797992 7614 scope.go:117] "RemoveContainer" containerID="d54fd19b9eb4386cf27b0171bbd26afecfaf6c5721e1c1b2aba9af1126e48295"
Feb 24 05:27:04.663115 master-0 kubenswrapper[7614]: I0224 05:27:04.662970 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:04.663115 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:04.663115 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:04.663115 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:04.663115 master-0 kubenswrapper[7614]: I0224 05:27:04.663108 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:04.809354 master-0 kubenswrapper[7614]: I0224 05:27:04.809184 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-pb6sw_e6a0fc47-b446-4902-9f8a-04870cbafcab/machine-approver-controller/0.log"
Feb 24 05:27:04.810571 master-0 kubenswrapper[7614]: I0224 05:27:04.809736 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" event={"ID":"e6a0fc47-b446-4902-9f8a-04870cbafcab","Type":"ContainerStarted","Data":"d1424fd7bedb0aa1ebf619e5fe915346ccfa535a47447286e1c2df063f038395"}
Feb 24 05:27:04.814715 master-0 kubenswrapper[7614]: I0224 05:27:04.814584 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" event={"ID":"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4","Type":"ContainerStarted","Data":"01f50b460983856284a210d9834ef5eef41fece749b0d8e696f6905032f26d3a"}
Feb 24 05:27:04.815179 master-0 kubenswrapper[7614]: I0224 05:27:04.815112 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6"
Feb 24 05:27:04.821861 master-0 kubenswrapper[7614]: I0224 05:27:04.821738 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6"
Feb 24 05:27:05.664103 master-0 kubenswrapper[7614]: I0224 05:27:05.663959 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:05.664103 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:05.664103 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:05.664103 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:05.664550 master-0 kubenswrapper[7614]: I0224 05:27:05.664118 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:06.607683 master-0 kubenswrapper[7614]: E0224 05:27:06.607568 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:27:06.663200 master-0 kubenswrapper[7614]: I0224 05:27:06.663105 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:06.663200 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:06.663200 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:06.663200 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:06.663559 master-0 kubenswrapper[7614]: I0224 05:27:06.663221 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:07.663562 master-0 kubenswrapper[7614]: I0224 05:27:07.663348 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:07.663562 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:07.663562 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:07.663562 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:07.664846 master-0 kubenswrapper[7614]: I0224 05:27:07.663716 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:07.839749 master-0 kubenswrapper[7614]: I0224 05:27:07.839663 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-zzvtt_32fd577d-8966-4ab1-95cf-357291084156/control-plane-machine-set-operator/1.log"
Feb 24 05:27:07.840842 master-0 kubenswrapper[7614]: I0224 05:27:07.840778 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-zzvtt_32fd577d-8966-4ab1-95cf-357291084156/control-plane-machine-set-operator/0.log"
Feb 24 05:27:07.840943 master-0 kubenswrapper[7614]: I0224 05:27:07.840884 7614 generic.go:334] "Generic (PLEG): container finished" podID="32fd577d-8966-4ab1-95cf-357291084156" containerID="b931c4e73120acfd5edaa21c3bd09b78ab41757182041f2c3263ed0153cf894b" exitCode=1
Feb 24 05:27:07.841016 master-0 kubenswrapper[7614]: I0224 05:27:07.840946 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" event={"ID":"32fd577d-8966-4ab1-95cf-357291084156","Type":"ContainerDied","Data":"b931c4e73120acfd5edaa21c3bd09b78ab41757182041f2c3263ed0153cf894b"}
Feb 24 05:27:07.841083 master-0 kubenswrapper[7614]: I0224 05:27:07.841020 7614 scope.go:117] "RemoveContainer" containerID="cd2e094a618f188c882e23ef5f50ea70a38793ab6e08f1bfec1cd4a082e97144"
Feb 24 05:27:07.843252 master-0 kubenswrapper[7614]: I0224 05:27:07.842611 7614 scope.go:117] "RemoveContainer" containerID="b931c4e73120acfd5edaa21c3bd09b78ab41757182041f2c3263ed0153cf894b"
Feb 24 05:27:07.843252 master-0 kubenswrapper[7614]: E0224 05:27:07.843047 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"control-plane-machine-set-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=control-plane-machine-set-operator pod=control-plane-machine-set-operator-686847ff5f-zzvtt_openshift-machine-api(32fd577d-8966-4ab1-95cf-357291084156)\"" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" podUID="32fd577d-8966-4ab1-95cf-357291084156"
Feb 24 05:27:08.664838 master-0 kubenswrapper[7614]: I0224 05:27:08.664727 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:08.664838 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:08.664838 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:08.664838 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:08.665889 master-0 kubenswrapper[7614]: I0224 05:27:08.664854 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:08.854627 master-0 kubenswrapper[7614]: I0224 05:27:08.854524 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-zzvtt_32fd577d-8966-4ab1-95cf-357291084156/control-plane-machine-set-operator/1.log"
Feb 24 05:27:09.664764 master-0 kubenswrapper[7614]: I0224 05:27:09.664427 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:09.664764 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:09.664764 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:09.664764 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:09.664764 master-0 kubenswrapper[7614]: I0224 05:27:09.664749 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:10.175071 master-0 kubenswrapper[7614]: I0224 05:27:10.174978 7614 scope.go:117] "RemoveContainer" containerID="b136ebe01c73b0fd59c9db45f5467f27ec8e855aa02eaefd1377f780ef7c8176"
Feb 24 05:27:10.175493 master-0 kubenswrapper[7614]: E0224 05:27:10.175382 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba"
Feb 24 05:27:10.517219 master-0 kubenswrapper[7614]: E0224 05:27:10.517092 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Feb 24 05:27:10.664294 master-0 kubenswrapper[7614]: I0224 05:27:10.664188 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:10.664294 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:10.664294 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:10.664294 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:10.664294 master-0 kubenswrapper[7614]: I0224 05:27:10.664277 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:10.939032 master-0 kubenswrapper[7614]: E0224 05:27:10.938683 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{authentication-operator-5bd7c86784-kbb8z.189716d586873a8b openshift-authentication-operator 4365 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-5bd7c86784-kbb8z,UID:59333a14-5bdc-4590-a3da-af7300f086da,APIVersion:v1,ResourceVersion:3428,FieldPath:spec.containers{authentication-operator},},Reason:Created,Message:Created container: authentication-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:14:44 +0000 UTC,LastTimestamp:2026-02-24 05:24:46.082959729 +0000 UTC m=+617.117702915,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 05:27:11.662918 master-0 kubenswrapper[7614]: I0224 05:27:11.662779 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:11.662918 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:11.662918 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:11.662918 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:11.663560 master-0 kubenswrapper[7614]: I0224 05:27:11.662926 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:12.663573 master-0 kubenswrapper[7614]: I0224 05:27:12.663442 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:12.663573 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:12.663573 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:12.663573 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:12.664809 master-0 kubenswrapper[7614]: I0224 05:27:12.663569 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:13.663653 master-0 kubenswrapper[7614]: I0224 05:27:13.663526 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:13.663653 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:13.663653 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:13.663653 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:13.664438 master-0 kubenswrapper[7614]: I0224 05:27:13.663650 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:27:13.770345 master-0 kubenswrapper[7614]: I0224 05:27:13.770195 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 05:27:13.770648 master-0 kubenswrapper[7614]: I0224 05:27:13.770384 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:27:14.664034 master-0 kubenswrapper[7614]: I0224 05:27:14.663916 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:27:14.664034 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:27:14.664034 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:27:14.664034 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:27:14.665422 master-0 kubenswrapper[7614]: I0224 05:27:14.664033 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe
failed with statuscode: 500" Feb 24 05:27:15.664027 master-0 kubenswrapper[7614]: I0224 05:27:15.663922 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:27:15.664027 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:27:15.664027 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:27:15.664027 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:27:15.664027 master-0 kubenswrapper[7614]: I0224 05:27:15.664015 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:27:16.609363 master-0 kubenswrapper[7614]: E0224 05:27:16.609093 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:16.609363 master-0 kubenswrapper[7614]: E0224 05:27:16.609160 7614 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 05:27:16.663355 master-0 kubenswrapper[7614]: I0224 05:27:16.663235 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:27:16.663355 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:27:16.663355 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:27:16.663355 master-0 kubenswrapper[7614]: 
healthz check failed Feb 24 05:27:16.663806 master-0 kubenswrapper[7614]: I0224 05:27:16.663364 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:27:17.663443 master-0 kubenswrapper[7614]: I0224 05:27:17.663265 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:27:17.663443 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:27:17.663443 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:27:17.663443 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:27:17.664580 master-0 kubenswrapper[7614]: I0224 05:27:17.663445 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:27:18.547122 master-0 kubenswrapper[7614]: E0224 05:27:18.547003 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 24 05:27:18.664160 master-0 kubenswrapper[7614]: I0224 05:27:18.663840 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:27:18.664160 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:27:18.664160 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 
05:27:18.664160 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:27:18.664160 master-0 kubenswrapper[7614]: I0224 05:27:18.663941 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:27:18.948932 master-0 kubenswrapper[7614]: I0224 05:27:18.948746 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:27:18.948932 master-0 kubenswrapper[7614]: I0224 05:27:18.948810 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:27:19.664251 master-0 kubenswrapper[7614]: I0224 05:27:19.664137 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:27:19.664251 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:27:19.664251 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:27:19.664251 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:27:19.665400 master-0 kubenswrapper[7614]: I0224 05:27:19.664268 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:27:20.174906 master-0 kubenswrapper[7614]: I0224 05:27:20.174820 7614 scope.go:117] "RemoveContainer" containerID="b931c4e73120acfd5edaa21c3bd09b78ab41757182041f2c3263ed0153cf894b" Feb 24 05:27:20.664013 master-0 kubenswrapper[7614]: I0224 05:27:20.663858 7614 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:27:20.664013 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:27:20.664013 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:27:20.664013 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:27:20.664013 master-0 kubenswrapper[7614]: I0224 05:27:20.664023 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:27:20.973894 master-0 kubenswrapper[7614]: I0224 05:27:20.973679 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-zzvtt_32fd577d-8966-4ab1-95cf-357291084156/control-plane-machine-set-operator/1.log" Feb 24 05:27:20.973894 master-0 kubenswrapper[7614]: I0224 05:27:20.973791 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" event={"ID":"32fd577d-8966-4ab1-95cf-357291084156","Type":"ContainerStarted","Data":"5bd676fac813f61741ff08eecd797517ce2fd911a291cf763c4a7e611a77c974"} Feb 24 05:27:21.662768 master-0 kubenswrapper[7614]: I0224 05:27:21.662722 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:27:21.662768 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:27:21.662768 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:27:21.662768 master-0 kubenswrapper[7614]: healthz check failed 
Feb 24 05:27:21.663200 master-0 kubenswrapper[7614]: I0224 05:27:21.663151 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:27:21.687435 master-0 kubenswrapper[7614]: I0224 05:27:21.687379 7614 status_manager.go:851] "Failed to get status for pod" podUID="79656ffd720980cfc7e8a06d9f509855" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Feb 24 05:27:22.663975 master-0 kubenswrapper[7614]: I0224 05:27:22.663824 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:27:22.663975 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:27:22.663975 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:27:22.663975 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:27:22.664653 master-0 kubenswrapper[7614]: I0224 05:27:22.663979 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:27:22.664653 master-0 kubenswrapper[7614]: I0224 05:27:22.664085 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:27:22.665565 master-0 kubenswrapper[7614]: I0224 05:27:22.665483 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" 
containerStatusID={"Type":"cri-o","ID":"ebf89d5ba5d68a652168caf590af22fc79d75d991b321ff2b9f369556f4d28c8"} pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" containerMessage="Container router failed startup probe, will be restarted" Feb 24 05:27:22.665741 master-0 kubenswrapper[7614]: I0224 05:27:22.665566 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" containerID="cri-o://ebf89d5ba5d68a652168caf590af22fc79d75d991b321ff2b9f369556f4d28c8" gracePeriod=3600 Feb 24 05:27:23.174862 master-0 kubenswrapper[7614]: I0224 05:27:23.174748 7614 scope.go:117] "RemoveContainer" containerID="b136ebe01c73b0fd59c9db45f5467f27ec8e855aa02eaefd1377f780ef7c8176" Feb 24 05:27:23.773268 master-0 kubenswrapper[7614]: I0224 05:27:23.773150 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:27:23.773731 master-0 kubenswrapper[7614]: I0224 05:27:23.773299 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:23.773731 master-0 kubenswrapper[7614]: I0224 05:27:23.773475 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:27:23.777558 master-0 kubenswrapper[7614]: I0224 
05:27:23.777450 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"6a82ca0444126f0d9d13c9d14a9452e234110172ab33d1a5f9dfae0996ef9cff"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 05:27:23.777810 master-0 kubenswrapper[7614]: I0224 05:27:23.777742 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" containerID="cri-o://6a82ca0444126f0d9d13c9d14a9452e234110172ab33d1a5f9dfae0996ef9cff" gracePeriod=30 Feb 24 05:27:24.013231 master-0 kubenswrapper[7614]: I0224 05:27:24.013142 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/1.log" Feb 24 05:27:24.014792 master-0 kubenswrapper[7614]: I0224 05:27:24.014730 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="6a82ca0444126f0d9d13c9d14a9452e234110172ab33d1a5f9dfae0996ef9cff" exitCode=255 Feb 24 05:27:24.014894 master-0 kubenswrapper[7614]: I0224 05:27:24.014826 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerDied","Data":"6a82ca0444126f0d9d13c9d14a9452e234110172ab33d1a5f9dfae0996ef9cff"} Feb 24 05:27:24.014966 master-0 kubenswrapper[7614]: I0224 05:27:24.014891 7614 scope.go:117] "RemoveContainer" containerID="bddc98ab8f891bcfeab1f13ad02fb7915d32f69a34209664b3c92c1ac4cbbe83" Feb 24 05:27:24.020487 master-0 kubenswrapper[7614]: I0224 05:27:24.020407 7614 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/2.log" Feb 24 05:27:24.020607 master-0 kubenswrapper[7614]: I0224 05:27:24.020538 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerStarted","Data":"cd2719bcd396d956c5ec0dbab7235948c54420d996fc5bd5c8732105713b4ef6"} Feb 24 05:27:24.704072 master-0 kubenswrapper[7614]: I0224 05:27:24.703974 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:27:24.705076 master-0 kubenswrapper[7614]: I0224 05:27:24.704178 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:27:25.035994 master-0 kubenswrapper[7614]: I0224 05:27:25.035885 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/1.log" Feb 24 05:27:25.037869 master-0 kubenswrapper[7614]: I0224 05:27:25.037792 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"0689e27af42ae81a96ec3b76fadf7f543350248eb0be477efc6f481259da5952"} Feb 24 05:27:27.518823 master-0 
kubenswrapper[7614]: E0224 05:27:27.518696 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:27:30.769509 master-0 kubenswrapper[7614]: I0224 05:27:30.769432 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:27:30.770109 master-0 kubenswrapper[7614]: I0224 05:27:30.770015 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:27:33.769867 master-0 kubenswrapper[7614]: I0224 05:27:33.769632 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:27:33.769867 master-0 kubenswrapper[7614]: I0224 05:27:33.769787 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:34.701798 master-0 kubenswrapper[7614]: I0224 05:27:34.701696 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 
10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:27:34.702213 master-0 kubenswrapper[7614]: I0224 05:27:34.701796 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:27:36.897789 master-0 kubenswrapper[7614]: E0224 05:27:36.897483 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:27:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:27:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:27:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:27:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha2
56:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"
],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d7279665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990
304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\"],\\\"sizeBytes\\\":471325816}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:43.769003 master-0 kubenswrapper[7614]: I0224 05:27:43.768900 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:27:43.770133 master-0 kubenswrapper[7614]: I0224 05:27:43.769021 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" 
output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:44.520917 master-0 kubenswrapper[7614]: E0224 05:27:44.520784 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:27:44.701523 master-0 kubenswrapper[7614]: I0224 05:27:44.701428 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:27:44.701523 master-0 kubenswrapper[7614]: I0224 05:27:44.701520 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:27:44.701956 master-0 kubenswrapper[7614]: I0224 05:27:44.701591 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:27:44.702527 master-0 kubenswrapper[7614]: I0224 05:27:44.702461 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"85a787ab234adbc4cec6c14f0d55a16949892b1a8442a2c568e5b38474ee2b06"} pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" containerMessage="Container authentication-operator failed 
liveness probe, will be restarted" Feb 24 05:27:44.702676 master-0 kubenswrapper[7614]: I0224 05:27:44.702536 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" containerID="cri-o://85a787ab234adbc4cec6c14f0d55a16949892b1a8442a2c568e5b38474ee2b06" gracePeriod=30 Feb 24 05:27:44.943174 master-0 kubenswrapper[7614]: E0224 05:27:44.942747 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{authentication-operator-5bd7c86784-kbb8z.189716d58785168b openshift-authentication-operator 4367 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-5bd7c86784-kbb8z,UID:59333a14-5bdc-4590-a3da-af7300f086da,APIVersion:v1,ResourceVersion:3428,FieldPath:spec.containers{authentication-operator},},Reason:Started,Message:Started container authentication-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:14:44 +0000 UTC,LastTimestamp:2026-02-24 05:24:46.09668796 +0000 UTC m=+617.131431126,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:27:45.222788 master-0 kubenswrapper[7614]: I0224 05:27:45.222595 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/3.log" Feb 24 05:27:45.223943 master-0 kubenswrapper[7614]: I0224 05:27:45.223900 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/2.log" Feb 24 05:27:45.224070 master-0 kubenswrapper[7614]: I0224 05:27:45.223957 7614 generic.go:334] "Generic (PLEG): container finished" podID="59333a14-5bdc-4590-a3da-af7300f086da" containerID="85a787ab234adbc4cec6c14f0d55a16949892b1a8442a2c568e5b38474ee2b06" exitCode=255 Feb 24 05:27:45.224070 master-0 kubenswrapper[7614]: I0224 05:27:45.224005 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerDied","Data":"85a787ab234adbc4cec6c14f0d55a16949892b1a8442a2c568e5b38474ee2b06"} Feb 24 05:27:45.224070 master-0 kubenswrapper[7614]: I0224 05:27:45.224062 7614 scope.go:117] "RemoveContainer" containerID="443e2cc8a24d2e54b563564a171d6e7bc732fa198a57aa6dc2d46c10dc569ce8" Feb 24 05:27:46.237597 master-0 kubenswrapper[7614]: I0224 05:27:46.237509 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/3.log" Feb 24 05:27:46.238520 master-0 kubenswrapper[7614]: I0224 05:27:46.237629 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerStarted","Data":"fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9"} Feb 24 05:27:46.898376 master-0 kubenswrapper[7614]: E0224 05:27:46.898269 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:47.580754 
master-0 kubenswrapper[7614]: I0224 05:27:47.580646 7614 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:27:47.580754 master-0 kubenswrapper[7614]: I0224 05:27:47.580731 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:47.581743 master-0 kubenswrapper[7614]: I0224 05:27:47.580783 7614 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:27:47.581743 master-0 kubenswrapper[7614]: I0224 05:27:47.580908 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:52.952654 master-0 kubenswrapper[7614]: E0224 05:27:52.952419 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 24 05:27:53.770789 
master-0 kubenswrapper[7614]: I0224 05:27:53.770707 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:27:53.771421 master-0 kubenswrapper[7614]: I0224 05:27:53.770850 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:53.771421 master-0 kubenswrapper[7614]: I0224 05:27:53.771169 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:27:53.772463 master-0 kubenswrapper[7614]: I0224 05:27:53.772397 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"0689e27af42ae81a96ec3b76fadf7f543350248eb0be477efc6f481259da5952"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 05:27:53.772648 master-0 kubenswrapper[7614]: I0224 05:27:53.772592 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" containerID="cri-o://0689e27af42ae81a96ec3b76fadf7f543350248eb0be477efc6f481259da5952" gracePeriod=30 Feb 24 05:27:54.324180 master-0 
kubenswrapper[7614]: I0224 05:27:54.323956 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/2.log" Feb 24 05:27:54.325617 master-0 kubenswrapper[7614]: I0224 05:27:54.325555 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/1.log" Feb 24 05:27:54.327565 master-0 kubenswrapper[7614]: I0224 05:27:54.327507 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="0689e27af42ae81a96ec3b76fadf7f543350248eb0be477efc6f481259da5952" exitCode=255 Feb 24 05:27:54.327712 master-0 kubenswrapper[7614]: I0224 05:27:54.327635 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerDied","Data":"0689e27af42ae81a96ec3b76fadf7f543350248eb0be477efc6f481259da5952"} Feb 24 05:27:54.327767 master-0 kubenswrapper[7614]: I0224 05:27:54.327751 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"} Feb 24 05:27:54.327813 master-0 kubenswrapper[7614]: I0224 05:27:54.327786 7614 scope.go:117] "RemoveContainer" containerID="6a82ca0444126f0d9d13c9d14a9452e234110172ab33d1a5f9dfae0996ef9cff" Feb 24 05:27:54.335813 master-0 kubenswrapper[7614]: I0224 05:27:54.335727 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"3faa482b60d54621bea5a4ad6da8d12fd13e54888c7a5e9ca7eac409b6e3607e"} Feb 24 
05:27:54.335920 master-0 kubenswrapper[7614]: I0224 05:27:54.335826 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"3b8e272471b366b9bb172b6754ab88ba7b2f94edde98e730bec762fb2e90114b"} Feb 24 05:27:54.335920 master-0 kubenswrapper[7614]: I0224 05:27:54.335856 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"d0ab31f6f0d346b7ad6a527bcfc361448429c220e4ee35962995980c2b8c2920"} Feb 24 05:27:54.339252 master-0 kubenswrapper[7614]: I0224 05:27:54.339202 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/1.log" Feb 24 05:27:54.341061 master-0 kubenswrapper[7614]: I0224 05:27:54.341020 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/0.log" Feb 24 05:27:54.341142 master-0 kubenswrapper[7614]: I0224 05:27:54.341095 7614 generic.go:334] "Generic (PLEG): container finished" podID="39623346-691b-42c8-af76-409d4f6629af" containerID="f390c9172bf667ce9e5a44fc191de51013e82e96eafb2547c98f9fa6aad29054" exitCode=1 Feb 24 05:27:54.341227 master-0 kubenswrapper[7614]: I0224 05:27:54.341199 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerDied","Data":"f390c9172bf667ce9e5a44fc191de51013e82e96eafb2547c98f9fa6aad29054"} Feb 24 05:27:54.341915 master-0 kubenswrapper[7614]: I0224 05:27:54.341874 7614 scope.go:117] "RemoveContainer" containerID="f390c9172bf667ce9e5a44fc191de51013e82e96eafb2547c98f9fa6aad29054" Feb 24 
05:27:54.342303 master-0 kubenswrapper[7614]: E0224 05:27:54.342258 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-54hnv_openshift-machine-api(39623346-691b-42c8-af76-409d4f6629af)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" podUID="39623346-691b-42c8-af76-409d4f6629af" Feb 24 05:27:54.344489 master-0 kubenswrapper[7614]: I0224 05:27:54.344438 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/3.log" Feb 24 05:27:54.345191 master-0 kubenswrapper[7614]: I0224 05:27:54.345146 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/2.log" Feb 24 05:27:54.345288 master-0 kubenswrapper[7614]: I0224 05:27:54.345241 7614 generic.go:334] "Generic (PLEG): container finished" podID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" containerID="cd2719bcd396d956c5ec0dbab7235948c54420d996fc5bd5c8732105713b4ef6" exitCode=1 Feb 24 05:27:54.345372 master-0 kubenswrapper[7614]: I0224 05:27:54.345298 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerDied","Data":"cd2719bcd396d956c5ec0dbab7235948c54420d996fc5bd5c8732105713b4ef6"} Feb 24 05:27:54.346303 master-0 kubenswrapper[7614]: I0224 05:27:54.346252 7614 scope.go:117] "RemoveContainer" containerID="cd2719bcd396d956c5ec0dbab7235948c54420d996fc5bd5c8732105713b4ef6" Feb 24 05:27:54.346705 master-0 kubenswrapper[7614]: E0224 05:27:54.346658 7614 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:27:54.367774 master-0 kubenswrapper[7614]: I0224 05:27:54.367671 7614 scope.go:117] "RemoveContainer" containerID="d4516cc83e87e18d7c8ea61312f0b1b6185fcfcd2b620f9f1b31d56f65e19d0a" Feb 24 05:27:54.424074 master-0 kubenswrapper[7614]: I0224 05:27:54.424022 7614 scope.go:117] "RemoveContainer" containerID="b136ebe01c73b0fd59c9db45f5467f27ec8e855aa02eaefd1377f780ef7c8176" Feb 24 05:27:55.362637 master-0 kubenswrapper[7614]: I0224 05:27:55.362530 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"fdbebfeba39a731ff604c815c6df5321e69f6b2fb32e9fc408276330fc71c740"} Feb 24 05:27:55.362637 master-0 kubenswrapper[7614]: I0224 05:27:55.362612 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"a2e40245bac675d1008091343bd8e0a984311d8d60109e460ea7d49e335d061a"} Feb 24 05:27:55.363899 master-0 kubenswrapper[7614]: I0224 05:27:55.362991 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:27:55.363899 master-0 kubenswrapper[7614]: I0224 05:27:55.363027 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:27:55.366245 master-0 kubenswrapper[7614]: I0224 05:27:55.366186 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/1.log" Feb 24 05:27:55.370548 master-0 kubenswrapper[7614]: I0224 05:27:55.370439 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/3.log" Feb 24 05:27:55.373859 master-0 kubenswrapper[7614]: I0224 05:27:55.373803 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/2.log" Feb 24 05:27:56.899242 master-0 kubenswrapper[7614]: E0224 05:27:56.899093 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:57.580542 master-0 kubenswrapper[7614]: I0224 05:27:57.580398 7614 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:27:57.580542 master-0 kubenswrapper[7614]: I0224 05:27:57.580494 7614 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:27:57.581031 master-0 kubenswrapper[7614]: I0224 05:27:57.580584 7614 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:57.581031 master-0 kubenswrapper[7614]: I0224 05:27:57.580632 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:27:59.210875 master-0 kubenswrapper[7614]: I0224 05:27:59.210797 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 24 05:28:00.769085 master-0 kubenswrapper[7614]: I0224 05:28:00.768986 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:28:00.769085 master-0 kubenswrapper[7614]: I0224 05:28:00.769099 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:28:01.522832 master-0 kubenswrapper[7614]: E0224 05:28:01.522719 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:28:03.769710 master-0 kubenswrapper[7614]: I0224 05:28:03.769573 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:28:03.771010 master-0 kubenswrapper[7614]: I0224 05:28:03.769700 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:28:04.210824 master-0 kubenswrapper[7614]: I0224 05:28:04.210583 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 24 05:28:04.247191 master-0 kubenswrapper[7614]: I0224 05:28:04.247124 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 24 05:28:06.589886 master-0 kubenswrapper[7614]: I0224 05:28:06.589797 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:28:06.900635 master-0 kubenswrapper[7614]: E0224 05:28:06.900377 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:28:07.175294 master-0 kubenswrapper[7614]: I0224 05:28:07.175086 7614 scope.go:117] "RemoveContainer" containerID="f390c9172bf667ce9e5a44fc191de51013e82e96eafb2547c98f9fa6aad29054" Feb 24 05:28:07.495868 master-0 kubenswrapper[7614]: I0224 05:28:07.495758 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/1.log" Feb 24 05:28:07.496587 master-0 kubenswrapper[7614]: I0224 05:28:07.496496 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerStarted","Data":"86e637d0b5dc95d562f8425432d6a525c0e0e358c1d51fc8a2c0d80b43fd747a"} Feb 24 05:28:09.175280 master-0 kubenswrapper[7614]: I0224 05:28:09.175201 7614 scope.go:117] "RemoveContainer" containerID="cd2719bcd396d956c5ec0dbab7235948c54420d996fc5bd5c8732105713b4ef6" Feb 24 05:28:09.176208 master-0 kubenswrapper[7614]: E0224 05:28:09.175651 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:28:09.240910 master-0 kubenswrapper[7614]: I0224 05:28:09.240690 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 24 05:28:09.522169 master-0 kubenswrapper[7614]: I0224 05:28:09.521934 7614 generic.go:334] "Generic (PLEG): container finished" podID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerID="ebf89d5ba5d68a652168caf590af22fc79d75d991b321ff2b9f369556f4d28c8" exitCode=0 Feb 24 05:28:09.522169 master-0 kubenswrapper[7614]: I0224 05:28:09.522009 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerDied","Data":"ebf89d5ba5d68a652168caf590af22fc79d75d991b321ff2b9f369556f4d28c8"} Feb 24 05:28:09.522169 master-0 kubenswrapper[7614]: I0224 05:28:09.522102 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerStarted","Data":"0d9c40e1ab9fe194700e549fe0bed42e1d026dc7732cf97087ef5f334f860eb9"} Feb 24 05:28:09.522169 master-0 kubenswrapper[7614]: I0224 05:28:09.522141 7614 scope.go:117] "RemoveContainer" containerID="644f295cce6b864cf139013130d16889b14ef33754986616f48c2d2d58ffa92d" Feb 24 05:28:09.660483 master-0 kubenswrapper[7614]: I0224 05:28:09.660376 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:28:09.665192 master-0 kubenswrapper[7614]: I0224 05:28:09.665103 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:09.665192 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:09.665192 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:09.665192 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:09.665547 master-0 kubenswrapper[7614]: I0224 05:28:09.665229 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:10.665030 master-0 kubenswrapper[7614]: I0224 05:28:10.664882 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:10.665030 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:10.665030 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:10.665030 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:10.666176 master-0 kubenswrapper[7614]: I0224 05:28:10.665046 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:11.663778 master-0 kubenswrapper[7614]: I0224 05:28:11.663642 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:11.663778 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:11.663778 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:11.663778 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:11.663778 master-0 kubenswrapper[7614]: I0224 05:28:11.663764 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:12.663830 master-0 kubenswrapper[7614]: I0224 05:28:12.663703 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:12.663830 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:12.663830 
master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:12.663830 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:12.665185 master-0 kubenswrapper[7614]: I0224 05:28:12.663830 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:13.664156 master-0 kubenswrapper[7614]: I0224 05:28:13.664029 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:13.664156 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:13.664156 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:13.664156 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:13.664156 master-0 kubenswrapper[7614]: I0224 05:28:13.664148 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:13.770391 master-0 kubenswrapper[7614]: I0224 05:28:13.770195 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:28:13.770800 master-0 kubenswrapper[7614]: I0224 05:28:13.770402 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:28:14.664142 master-0 kubenswrapper[7614]: I0224 05:28:14.664024 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:14.664142 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:14.664142 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:14.664142 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:14.664142 master-0 kubenswrapper[7614]: I0224 05:28:14.664138 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:15.659976 master-0 kubenswrapper[7614]: I0224 05:28:15.659814 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:28:15.663675 master-0 kubenswrapper[7614]: I0224 05:28:15.663594 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:15.663675 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:15.663675 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:15.663675 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:15.663961 master-0 kubenswrapper[7614]: I0224 05:28:15.663704 7614 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:16.663884 master-0 kubenswrapper[7614]: I0224 05:28:16.663728 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:16.663884 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:16.663884 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:16.663884 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:16.663884 master-0 kubenswrapper[7614]: I0224 05:28:16.663832 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:16.901259 master-0 kubenswrapper[7614]: E0224 05:28:16.901151 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:28:16.901259 master-0 kubenswrapper[7614]: E0224 05:28:16.901213 7614 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 05:28:17.664748 master-0 kubenswrapper[7614]: I0224 05:28:17.664643 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
05:28:17.664748 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:17.664748 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:17.664748 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:17.665981 master-0 kubenswrapper[7614]: I0224 05:28:17.664773 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:18.523516 master-0 kubenswrapper[7614]: E0224 05:28:18.523379 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:28:18.663971 master-0 kubenswrapper[7614]: I0224 05:28:18.663798 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:18.663971 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:18.663971 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:18.663971 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:18.663971 master-0 kubenswrapper[7614]: I0224 05:28:18.663911 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:18.947947 master-0 kubenswrapper[7614]: E0224 05:28:18.947574 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within 
requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{network-node-identity-rlg4x.18971767a0bf66fa openshift-network-node-identity 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-network-node-identity,Name:network-node-identity-rlg4x,UID:c106275b-72b6-4877-95c3-830f93e35375,APIVersion:v1,ResourceVersion:3147,FieldPath:spec.containers{approver},},Reason:BackOff,Message:Back-off restarting failed container approver in pod network-node-identity-rlg4x_openshift-network-node-identity(c106275b-72b6-4877-95c3-830f93e35375),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:25:12.239548154 +0000 UTC m=+643.274291350,LastTimestamp:2026-02-24 05:25:12.239548154 +0000 UTC m=+643.274291350,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:28:19.664188 master-0 kubenswrapper[7614]: I0224 05:28:19.664096 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:19.664188 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:19.664188 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:19.664188 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:19.664568 master-0 kubenswrapper[7614]: I0224 05:28:19.664231 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:20.664535 master-0 kubenswrapper[7614]: I0224 05:28:20.664405 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:20.664535 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:20.664535 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:20.664535 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:20.665773 master-0 kubenswrapper[7614]: I0224 05:28:20.664550 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:21.662847 master-0 kubenswrapper[7614]: I0224 05:28:21.662701 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:21.662847 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:21.662847 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:21.662847 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:21.662847 master-0 kubenswrapper[7614]: I0224 05:28:21.662808 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:21.690131 master-0 kubenswrapper[7614]: I0224 05:28:21.690060 7614 status_manager.go:851] "Failed to get status for pod" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" pod="openshift-multus/cni-sysctl-allowlist-ds-j28p2" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods 
cni-sysctl-allowlist-ds-j28p2)" Feb 24 05:28:22.663947 master-0 kubenswrapper[7614]: I0224 05:28:22.663845 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:22.663947 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:22.663947 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:22.663947 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:22.664334 master-0 kubenswrapper[7614]: I0224 05:28:22.663979 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:23.174596 master-0 kubenswrapper[7614]: I0224 05:28:23.174487 7614 scope.go:117] "RemoveContainer" containerID="cd2719bcd396d956c5ec0dbab7235948c54420d996fc5bd5c8732105713b4ef6" Feb 24 05:28:23.175638 master-0 kubenswrapper[7614]: E0224 05:28:23.174976 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:28:23.664125 master-0 kubenswrapper[7614]: I0224 05:28:23.664009 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:23.664125 master-0 
kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:23.664125 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:23.664125 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:23.664125 master-0 kubenswrapper[7614]: I0224 05:28:23.664129 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:23.769784 master-0 kubenswrapper[7614]: I0224 05:28:23.769655 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:28:23.769784 master-0 kubenswrapper[7614]: I0224 05:28:23.769768 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:28:23.770245 master-0 kubenswrapper[7614]: I0224 05:28:23.769867 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:28:23.770905 master-0 kubenswrapper[7614]: I0224 05:28:23.770837 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"} 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 05:28:23.771059 master-0 kubenswrapper[7614]: I0224 05:28:23.771008 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" containerID="cri-o://386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" gracePeriod=30 Feb 24 05:28:23.894150 master-0 kubenswrapper[7614]: E0224 05:28:23.894084 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(79656ffd720980cfc7e8a06d9f509855)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" Feb 24 05:28:24.663487 master-0 kubenswrapper[7614]: I0224 05:28:24.663363 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:24.663487 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:24.663487 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:24.663487 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:24.663487 master-0 kubenswrapper[7614]: I0224 05:28:24.663483 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 
05:28:24.674734 master-0 kubenswrapper[7614]: I0224 05:28:24.674664 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/3.log" Feb 24 05:28:24.675500 master-0 kubenswrapper[7614]: I0224 05:28:24.675445 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/2.log" Feb 24 05:28:24.676975 master-0 kubenswrapper[7614]: I0224 05:28:24.676893 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" exitCode=255 Feb 24 05:28:24.676975 master-0 kubenswrapper[7614]: I0224 05:28:24.676962 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerDied","Data":"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"} Feb 24 05:28:24.677956 master-0 kubenswrapper[7614]: I0224 05:28:24.677027 7614 scope.go:117] "RemoveContainer" containerID="0689e27af42ae81a96ec3b76fadf7f543350248eb0be477efc6f481259da5952" Feb 24 05:28:24.678444 master-0 kubenswrapper[7614]: I0224 05:28:24.678253 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" Feb 24 05:28:24.678910 master-0 kubenswrapper[7614]: E0224 05:28:24.678861 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(79656ffd720980cfc7e8a06d9f509855)\"" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" Feb 24 05:28:24.704620 master-0 kubenswrapper[7614]: I0224 05:28:24.704008 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:28:24.704620 master-0 kubenswrapper[7614]: I0224 05:28:24.704135 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:28:25.664153 master-0 kubenswrapper[7614]: I0224 05:28:25.664050 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:25.664153 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:25.664153 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:25.664153 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:25.665402 master-0 kubenswrapper[7614]: I0224 05:28:25.664169 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:25.690186 master-0 kubenswrapper[7614]: I0224 05:28:25.690108 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/3.log" Feb 24 05:28:26.663933 master-0 kubenswrapper[7614]: I0224 05:28:26.663830 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:26.663933 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:26.663933 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:26.663933 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:26.663933 master-0 kubenswrapper[7614]: I0224 05:28:26.663907 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:27.665709 master-0 kubenswrapper[7614]: I0224 05:28:27.665521 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:27.665709 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:27.665709 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:27.665709 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:27.665709 master-0 kubenswrapper[7614]: I0224 05:28:27.665679 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:28.664255 master-0 kubenswrapper[7614]: I0224 
05:28:28.664101 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:28.664255 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:28.664255 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:28.664255 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:28.664850 master-0 kubenswrapper[7614]: I0224 05:28:28.664259 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:29.367834 master-0 kubenswrapper[7614]: E0224 05:28:29.367730 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 24 05:28:29.919614 master-0 kubenswrapper[7614]: I0224 05:28:29.919527 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:29.919614 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:29.919614 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:29.919614 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:29.920016 master-0 kubenswrapper[7614]: I0224 05:28:29.919630 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 
05:28:29.927043 master-0 kubenswrapper[7614]: I0224 05:28:29.926951 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:28:29.927043 master-0 kubenswrapper[7614]: I0224 05:28:29.927039 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:28:30.663627 master-0 kubenswrapper[7614]: I0224 05:28:30.663520 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:30.663627 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:30.663627 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:30.663627 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:30.663627 master-0 kubenswrapper[7614]: I0224 05:28:30.663608 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:30.769226 master-0 kubenswrapper[7614]: I0224 05:28:30.769129 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:28:30.770209 master-0 kubenswrapper[7614]: I0224 05:28:30.770166 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" Feb 24 05:28:30.770714 master-0 kubenswrapper[7614]: E0224 05:28:30.770672 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller 
pod=kube-controller-manager-master-0_openshift-kube-controller-manager(79656ffd720980cfc7e8a06d9f509855)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" Feb 24 05:28:31.670373 master-0 kubenswrapper[7614]: I0224 05:28:31.667947 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:31.670373 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:31.670373 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:31.670373 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:31.670373 master-0 kubenswrapper[7614]: I0224 05:28:31.668109 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:32.664235 master-0 kubenswrapper[7614]: I0224 05:28:32.664032 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:32.664235 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:32.664235 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:32.664235 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:32.664235 master-0 kubenswrapper[7614]: I0224 05:28:32.664144 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 24 05:28:33.663961 master-0 kubenswrapper[7614]: I0224 05:28:33.663891 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:33.663961 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:33.663961 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:33.663961 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:33.665247 master-0 kubenswrapper[7614]: I0224 05:28:33.664578 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:34.174082 master-0 kubenswrapper[7614]: I0224 05:28:34.174003 7614 scope.go:117] "RemoveContainer" containerID="cd2719bcd396d956c5ec0dbab7235948c54420d996fc5bd5c8732105713b4ef6" Feb 24 05:28:34.663688 master-0 kubenswrapper[7614]: I0224 05:28:34.663570 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:34.663688 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:34.663688 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:34.663688 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:34.664012 master-0 kubenswrapper[7614]: I0224 05:28:34.663722 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:34.702415 
master-0 kubenswrapper[7614]: I0224 05:28:34.701881 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Feb 24 05:28:34.702415 master-0 kubenswrapper[7614]: I0224 05:28:34.702193 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Feb 24 05:28:34.974514 master-0 kubenswrapper[7614]: I0224 05:28:34.974338 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/3.log" Feb 24 05:28:34.974514 master-0 kubenswrapper[7614]: I0224 05:28:34.974436 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerStarted","Data":"1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a"} Feb 24 05:28:35.524782 master-0 kubenswrapper[7614]: E0224 05:28:35.524650 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 24 05:28:35.663867 master-0 kubenswrapper[7614]: I0224 05:28:35.663738 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:35.663867 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:35.663867 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:35.663867 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:35.663867 master-0 kubenswrapper[7614]: I0224 05:28:35.663862 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:36.664518 master-0 kubenswrapper[7614]: I0224 05:28:36.664417 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:36.664518 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:36.664518 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:36.664518 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:36.665816 master-0 kubenswrapper[7614]: I0224 05:28:36.664531 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:36.944463 master-0 kubenswrapper[7614]: E0224 05:28:36.944050 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:28:26Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:28:26Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:28:26Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T05:28:26Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:08cff7c9164822cf90c1ddc99284f5fd3c4efbfdf7ff5d2da94ff20f03d57215\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8665346de3cec5b1443fb1e3bf6389962210affa684e5c1b521ec342f56e0901\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1703852494},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94d88fe2fa42931a725508dbf17296b6ed99b8e20c1169f5d1fb8a36f4927ddd\\\"],\\\"sizeBytes\\\":1637274270},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:10e72e1dffd75bda73d89a11e18d98c99255c0f2c54d81f82a2a48b0b86b96b5\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d64168b357c44a3e5febdd4d99c285c68217a6568f9de2371d72e8a089d42b69\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1238591178},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a8ac0ba2e5115c9d451d553741173ae8744d4544da15e28bf38f61630182fd\\\"],\\\"sizeBytes\\\":1237794314},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:155018f64a4d43025cb88586009847bd0f7844afa3e1aa81639d31b96bebd68e\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:4154e7856e2578eae0af7bc7ade3338a49c179e8e0b9d8b5167540e580ffc22b\\\",\\\"registry.redhat.io/redhat/community-opera
tor-index:v4.18\\\"],\\\"sizeBytes\\\":1210563790},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:518982b9ad8a8bfb7bb3b4216b235cac99e126df3bb48e390b36064560c76b83\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b3293b04e31c8e67c885f77e0ad2ee994295afde7c42cb9761c7090ae0cdb3f8\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1202767548},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4775c6461221dafe3ddd67ff683ccb665bed6eb278fa047d9d744aab9af65dcf\\\"],\\\"sizeBytes\\\":992461126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\\\"],\\\"sizeBytes\\\":943734757},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6c7ec917f0eff7b41d7174f1b5fdc4ce53ad106e51599afba731a8431ff9caa7\\\"],\\\"sizeBytes\\\":918153745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ff40a2d97bf7a95e19303f7e972b7e8354a3864039111c6d33d5479117aaeed\\\"],\\\"sizeBytes\\\":880247193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72fafcd55ab739919dd8a114863fda27106af1c497f474e7ce0cb23b58dfa021\\\"],\\\"sizeBytes\\\":875998518},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b9239f1f5e9590e3db71e61fde86db8f43e0085f61ae7769508d2ea058481c7\\\"],\\\"sizeBytes\\\":862501144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572b0ca6e993beea2ee9346197665e56a2e4999fbb6958c747c48a35bf72ee34\\\"],\\\"sizeBytes\\\":862091954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3fa84eaa1310d97fe55bb23a7c27ece85718d0643fa7fc0ff81014edb4b948b\\\"],\\\"sizeBytes\\\":772838975},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd420e879c9f0271bca2d123a6d762591d9a4626b72f254d1f885842c32149e8\\\"],\\\"sizeBytes\\\":687849728},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c467c1eeba7434b2aebf07169ab8afe0203d638e871dbdf29a16f830e9aef9e\\\"],\\\"sizeBytes\\\":682963466},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5121a0944000b7bfa57ae2e4eb3f412e1b4b89fcc75eec1ef20241182c0527f2\\\"],\\\"sizeBytes\\\":677827184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a31b448302fbb994548ed801ac488a44e8a7c4ae9149c3b4cc20d6af832f83\\\"],\\\"sizeBytes\\\":621542709},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e089c4e4fa9a22803b2673b776215e021a1f12a856dbcaba2fadee29bee10a3\\\"],\\\"sizeBytes\\\":589275174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1582ea693f35073e3316e2380a18227b78096ca7f4e1328f1dd8a2c423da26e9\\\"],\\\"sizeBytes\\\":582052489},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:314be88d356b2c8a3c4416daeb4cfcd58d617a4526319c01ddaffae4b4179e74\\\"],\\\"sizeBytes\\\":558105176},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:69f9df2f6b5cd83ab895e9e4a9bf8920d35fe450679ce06fb223944e95cfbe3e\\\"],\\\"sizeBytes\\\":557320737},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f86073cf0561e4b69668f8917ef5184cb0ef5aa16d0fefe38118f1167b268721\\\"],\\\"sizeBytes\\\":548646306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d77a77c401bcfaa65a6ab6de82415af0e7ace1b470626647e5feb4875c89a5ef\\\"],\\\"sizeBytes\\\":529218694},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bc0ca626e5e17f9f78ddbfde54ea13ddc7749904911817bba16e6b59f30499ec\\\"],\\\"sizeBytes\\\":528829499},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:11f566fe2ae782ad96d36028b0fd81911a64ef787dcebc83803f741f272fa396\\\"],\\\"sizeBytes\\\":518279996},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:40bb7cf7c637bf9efd8fb0157839d325a019d67cc7d727
9665fcf90dbb7f3f33\\\"],\\\"sizeBytes\\\":517888569},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fd63e2c1185e529c6e9f6e1426222ff2ac195132b44a1775f407e4593b66d4c\\\"],\\\"sizeBytes\\\":514875199},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1b426a276216372c7d688fe60e9eaf251efd35071f94e1bcd4337f51a90fd75\\\"],\\\"sizeBytes\\\":513473308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce471c00b59fd855a59f7efa9afdb3f0f9cbf1c4bcce3a82fe1a4cb82e90f52e\\\"],\\\"sizeBytes\\\":513119434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9dcbc6b966928b7597d4a822948ae6f07b62feecb91679c1d825d0d19426e19\\\"],\\\"sizeBytes\\\":512172666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f4a546983224e416dfcc3a700afc15f9790182a5a2f8f7c94892d0e95abab3\\\"],\\\"sizeBytes\\\":511125422},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8de5c5b21ed8c7829ba988d580ffa470c9913877fe0ee5e11bf507400ffbc7\\\"],\\\"sizeBytes\\\":511059399},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:64ba461fd5594e3a30bfd755f1496707a88249bc68d07c65124c8617d664d2ac\\\"],\\\"sizeBytes\\\":508786786},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a82e441a9e9b93f0e010f1ce26e30c24b6ca93f7752084d4694ebdb3c5b53f83\\\"],\\\"sizeBytes\\\":508443359},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7bd3361d506dcc1be3afa62d35080c5dd37afccc26cd36019e2b9db2c45f896\\\"],\\\"sizeBytes\\\":507867630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:034588ffd95ce834e866279bf80a45af2cddda631c6c9a6344c1bb2e033fd83e\\\"],\\\"sizeBytes\\\":506374680},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8618d42fe4da4881abe39e98691d187e13713981b66d0dac0a11cb1287482b7\\\"],\\\"sizeBytes\\\":506291135},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:ce68078d909b63bb5b872d94c04829aa1b5812c416abbaf9024840d348ee68b1\\\"],\\\"sizeBytes\\\":505244089},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:457c564075e8b14b1d24ff6eab750600ebc90ff8b7bb137306a579ee8445ae95\\\"],\\\"sizeBytes\\\":505137106},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebf883de8fd905490f0c9b420a5d6446ecde18e12e15364f6dcd4e885104972c\\\"],\\\"sizeBytes\\\":504558291},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:897708222502e4d710dd737923f74d153c084ba6048bffceb16dfd30f79a6ecc\\\"],\\\"sizeBytes\\\":504513960},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d9e1fdf97794f44fc1c91da025714ec6900fafa6cdc4c0041ffa95e9d70c6c\\\"],\\\"sizeBytes\\\":495888162},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e8c6ae1f9a450c90857c9fbccf1e5fb404dbc0d65d086afce005d6bd307853b\\\"],\\\"sizeBytes\\\":494959854},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb94366d6d4423592369eeca84f0fe98325db13d0ab9e0291db9f1a337cd7143\\\"],\\\"sizeBytes\\\":487054953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:117a846734fc8159b7172a40ed2feb43a969b7dbc113ee1a572cbf6f9f922655\\\"],\\\"sizeBytes\\\":486990304},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4797a485fd4ab3414ba8d52bdf2afccefab6c657b1d259baad703fca5145124c\\\"],\\\"sizeBytes\\\":484349508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a132d09565133b36ac7c797213d6a74ac810bb368ef59136320ab3d300f45bd\\\"],\\\"sizeBytes\\\":484074784},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a1dcd1b7d6878b28ed95aed9f0c0e2df156c17cb9fe5971400b983e3f2be29c\\\"],\\\"sizeBytes\\\":480427687},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b05fb5dedd9a53747df98c2a1956ace8e233ad575204fbec990e39705e36dfb\\\
"],\\\"sizeBytes\\\":471325816}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:28:37.665516 master-0 kubenswrapper[7614]: I0224 05:28:37.665342 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:37.665516 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:37.665516 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:37.665516 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:37.665516 master-0 kubenswrapper[7614]: I0224 05:28:37.665473 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:38.664947 master-0 kubenswrapper[7614]: I0224 05:28:38.664753 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:38.664947 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:38.664947 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:38.664947 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:38.664947 master-0 kubenswrapper[7614]: I0224 05:28:38.664852 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:39.011339 master-0 kubenswrapper[7614]: I0224 05:28:39.011254 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/4.log"
Feb 24 05:28:39.012224 master-0 kubenswrapper[7614]: I0224 05:28:39.011992 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/3.log"
Feb 24 05:28:39.012869 master-0 kubenswrapper[7614]: I0224 05:28:39.012786 7614 generic.go:334] "Generic (PLEG): container finished" podID="3d6b1ce7-1213-494c-829d-186d39eac7eb" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d" exitCode=1
Feb 24 05:28:39.012978 master-0 kubenswrapper[7614]: I0224 05:28:39.012827 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerDied","Data":"09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d"}
Feb 24 05:28:39.013051 master-0 kubenswrapper[7614]: I0224 05:28:39.012992 7614 scope.go:117] "RemoveContainer" containerID="cfe91b9dce3107eef3be77e003af99516d67b13614554d783a1ee356de5c61ba"
Feb 24 05:28:39.014168 master-0 kubenswrapper[7614]: I0224 05:28:39.014083 7614 scope.go:117] "RemoveContainer" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d"
Feb 24 05:28:39.014752 master-0 kubenswrapper[7614]: E0224 05:28:39.014697 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb"
Feb 24 05:28:39.664193 master-0 kubenswrapper[7614]: I0224 05:28:39.664071 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:39.664193 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:39.664193 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:39.664193 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:39.664193 master-0 kubenswrapper[7614]: I0224 05:28:39.664185 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:40.021371 master-0 kubenswrapper[7614]: I0224 05:28:40.021284 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/4.log"
Feb 24 05:28:40.663932 master-0 kubenswrapper[7614]: I0224 05:28:40.663817 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:40.663932 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:40.663932 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:40.663932 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:40.664514 master-0 kubenswrapper[7614]: I0224 05:28:40.663973 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:41.175170 master-0 kubenswrapper[7614]: I0224 05:28:41.175089 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"
Feb 24 05:28:41.175788 master-0 kubenswrapper[7614]: E0224 05:28:41.175641 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(79656ffd720980cfc7e8a06d9f509855)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855"
Feb 24 05:28:41.663889 master-0 kubenswrapper[7614]: I0224 05:28:41.663795 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:41.663889 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:41.663889 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:41.663889 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:41.664454 master-0 kubenswrapper[7614]: I0224 05:28:41.663898 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:42.663920 master-0 kubenswrapper[7614]: I0224 05:28:42.663816 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:42.663920 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:42.663920 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:42.663920 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:42.663920 master-0 kubenswrapper[7614]: I0224 05:28:42.663920 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:43.664787 master-0 kubenswrapper[7614]: I0224 05:28:43.664671 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:43.664787 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:43.664787 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:43.664787 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:43.665781 master-0 kubenswrapper[7614]: I0224 05:28:43.664797 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:44.663714 master-0 kubenswrapper[7614]: I0224 05:28:44.663623 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:44.663714 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:44.663714 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:44.663714 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:44.664463 master-0 kubenswrapper[7614]: I0224 05:28:44.663727 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:44.702279 master-0 kubenswrapper[7614]: I0224 05:28:44.702163 7614 patch_prober.go:28] interesting pod/authentication-operator-5bd7c86784-kbb8z container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body=
Feb 24 05:28:44.703028 master-0 kubenswrapper[7614]: I0224 05:28:44.702279 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused"
Feb 24 05:28:44.703028 master-0 kubenswrapper[7614]: I0224 05:28:44.702378 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:28:44.703565 master-0 kubenswrapper[7614]: I0224 05:28:44.703507 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9"} pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" containerMessage="Container authentication-operator failed liveness probe, will be restarted"
Feb 24 05:28:44.703686 master-0 kubenswrapper[7614]: I0224 05:28:44.703573 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" containerName="authentication-operator" containerID="cri-o://fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9" gracePeriod=30
Feb 24 05:28:45.041662 master-0 kubenswrapper[7614]: E0224 05:28:45.041569 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da)\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da"
Feb 24 05:28:45.074586 master-0 kubenswrapper[7614]: I0224 05:28:45.074459 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/4.log"
Feb 24 05:28:45.075816 master-0 kubenswrapper[7614]: I0224 05:28:45.075754 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/3.log"
Feb 24 05:28:45.075934 master-0 kubenswrapper[7614]: I0224 05:28:45.075837 7614 generic.go:334] "Generic (PLEG): container finished" podID="59333a14-5bdc-4590-a3da-af7300f086da" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9" exitCode=255
Feb 24 05:28:45.075934 master-0 kubenswrapper[7614]: I0224 05:28:45.075889 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerDied","Data":"fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9"}
Feb 24 05:28:45.076063 master-0 kubenswrapper[7614]: I0224 05:28:45.075968 7614 scope.go:117] "RemoveContainer" containerID="85a787ab234adbc4cec6c14f0d55a16949892b1a8442a2c568e5b38474ee2b06"
Feb 24 05:28:45.077043 master-0 kubenswrapper[7614]: I0224 05:28:45.076961 7614 scope.go:117] "RemoveContainer" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9"
Feb 24 05:28:45.077487 master-0 kubenswrapper[7614]: E0224 05:28:45.077431 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da)\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da"
Feb 24 05:28:45.663980 master-0 kubenswrapper[7614]: I0224 05:28:45.663843 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:45.663980 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:45.663980 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:45.663980 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:45.663980 master-0 kubenswrapper[7614]: I0224 05:28:45.663955 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:46.085829 master-0 kubenswrapper[7614]: I0224 05:28:46.085722 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/4.log"
Feb 24 05:28:46.663991 master-0 kubenswrapper[7614]: I0224 05:28:46.663889 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:46.663991 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:46.663991 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:46.663991 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:46.664551 master-0 kubenswrapper[7614]: I0224 05:28:46.663998 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:46.944970 master-0 kubenswrapper[7614]: E0224 05:28:46.944778 7614 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:28:47.665153 master-0 kubenswrapper[7614]: I0224 05:28:47.664995 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:47.665153 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:47.665153 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:47.665153 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:47.665153 master-0 kubenswrapper[7614]: I0224 05:28:47.665125 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:48.664350 master-0 kubenswrapper[7614]: I0224 05:28:48.663996 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:48.664350 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:48.664350 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:48.664350 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:48.665066 master-0 kubenswrapper[7614]: I0224 05:28:48.664375 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:49.664009 master-0 kubenswrapper[7614]: I0224 05:28:49.663878 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:49.664009 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:49.664009 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:49.664009 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:49.664980 master-0 kubenswrapper[7614]: I0224 05:28:49.664022 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:50.664366 master-0 kubenswrapper[7614]: I0224 05:28:50.664251 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:50.664366 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:50.664366 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:50.664366 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:50.665176 master-0 kubenswrapper[7614]: I0224 05:28:50.664384 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:51.662999 master-0 kubenswrapper[7614]: I0224 05:28:51.662898 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:51.662999 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:51.662999 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:51.662999 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:51.662999 master-0 kubenswrapper[7614]: I0224 05:28:51.662992 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:52.174967 master-0 kubenswrapper[7614]: I0224 05:28:52.174880 7614 scope.go:117] "RemoveContainer" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d"
Feb 24 05:28:52.175631 master-0 kubenswrapper[7614]: E0224 05:28:52.175602 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb"
Feb 24 05:28:52.525792 master-0 kubenswrapper[7614]: E0224 05:28:52.525671 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Feb 24 05:28:52.665079 master-0 kubenswrapper[7614]: I0224 05:28:52.664981 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:52.665079 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:52.665079 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:52.665079 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:52.666405 master-0 kubenswrapper[7614]: I0224 05:28:52.666343 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:52.953538 master-0 kubenswrapper[7614]: E0224 05:28:52.953199 7614 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=<
Feb 24 05:28:52.953538 master-0 kubenswrapper[7614]: &Event{ObjectMeta:{authentication-operator-5bd7c86784-kbb8z.1897176a87a258a5 openshift-authentication-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-5bd7c86784-kbb8z,UID:59333a14-5bdc-4590-a3da-af7300f086da,APIVersion:v1,ResourceVersion:3428,FieldPath:spec.containers{authentication-operator},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.128.0.17:8443/healthz": dial tcp 10.128.0.17:8443: connect: connection refused
Feb 24 05:28:52.953538 master-0 kubenswrapper[7614]: body:
Feb 24 05:28:52.953538 master-0 kubenswrapper[7614]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:25:24.703115429 +0000 UTC m=+655.737858635,LastTimestamp:2026-02-24 05:25:24.703115429 +0000 UTC m=+655.737858635,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Feb 24 05:28:52.953538 master-0 kubenswrapper[7614]: >
Feb 24 05:28:53.663271 master-0 kubenswrapper[7614]: I0224 05:28:53.663178 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:53.663271 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:53.663271 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:53.663271 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:53.664805 master-0 kubenswrapper[7614]: I0224 05:28:53.664743 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:54.214991 master-0 kubenswrapper[7614]: I0224 05:28:54.214830 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"
Feb 24 05:28:54.215985 master-0 kubenswrapper[7614]: E0224 05:28:54.215931 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(79656ffd720980cfc7e8a06d9f509855)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855"
Feb 24 05:28:54.662913 master-0 kubenswrapper[7614]: I0224 05:28:54.662758 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:54.662913 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:54.662913 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:54.662913 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:54.662913 master-0 kubenswrapper[7614]: I0224 05:28:54.662858 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:55.663184 master-0 kubenswrapper[7614]: I0224 05:28:55.663048 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:55.663184 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:55.663184 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:55.663184 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:55.664519 master-0 kubenswrapper[7614]: I0224 05:28:55.663189 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:56.663041 master-0 kubenswrapper[7614]: I0224 05:28:56.662954 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:56.663041 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:28:56.663041 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:28:56.663041 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:28:56.663041 master-0 kubenswrapper[7614]: I0224 05:28:56.663039 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:28:57.665712 master-0 kubenswrapper[7614]: I0224 05:28:57.664673 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:28:57.665712 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 
05:28:57.665712 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:57.665712 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:57.665712 master-0 kubenswrapper[7614]: I0224 05:28:57.665270 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:58.664134 master-0 kubenswrapper[7614]: I0224 05:28:58.663924 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:58.664134 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:58.664134 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:58.664134 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:58.664134 master-0 kubenswrapper[7614]: I0224 05:28:58.664031 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:28:59.175235 master-0 kubenswrapper[7614]: I0224 05:28:59.175148 7614 scope.go:117] "RemoveContainer" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9" Feb 24 05:28:59.176224 master-0 kubenswrapper[7614]: E0224 05:28:59.175564 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da)\"" 
pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" Feb 24 05:28:59.221185 master-0 kubenswrapper[7614]: I0224 05:28:59.221065 7614 generic.go:334] "Generic (PLEG): container finished" podID="1e7f7c02-4c84-432a-8d59-25dd3bfef5c2" containerID="efa90e77631439dbef62b24eb0a109dbbb0250a2d2b24124da5e8a8cbc7dcbd0" exitCode=0 Feb 24 05:28:59.221566 master-0 kubenswrapper[7614]: I0224 05:28:59.221168 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" event={"ID":"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2","Type":"ContainerDied","Data":"efa90e77631439dbef62b24eb0a109dbbb0250a2d2b24124da5e8a8cbc7dcbd0"} Feb 24 05:28:59.222162 master-0 kubenswrapper[7614]: I0224 05:28:59.222085 7614 scope.go:117] "RemoveContainer" containerID="efa90e77631439dbef62b24eb0a109dbbb0250a2d2b24124da5e8a8cbc7dcbd0" Feb 24 05:28:59.225001 master-0 kubenswrapper[7614]: I0224 05:28:59.224948 7614 generic.go:334] "Generic (PLEG): container finished" podID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerID="8c92ed541ae527386db4b6a76cf26d9c5a64e4216b4963a7e69a420ee8324c44" exitCode=0 Feb 24 05:28:59.225170 master-0 kubenswrapper[7614]: I0224 05:28:59.225003 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerDied","Data":"8c92ed541ae527386db4b6a76cf26d9c5a64e4216b4963a7e69a420ee8324c44"} Feb 24 05:28:59.226034 master-0 kubenswrapper[7614]: I0224 05:28:59.225934 7614 scope.go:117] "RemoveContainer" containerID="8c92ed541ae527386db4b6a76cf26d9c5a64e4216b4963a7e69a420ee8324c44" Feb 24 05:28:59.664077 master-0 kubenswrapper[7614]: I0224 05:28:59.663990 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:28:59.664077 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:28:59.664077 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:28:59.664077 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:28:59.664616 master-0 kubenswrapper[7614]: I0224 05:28:59.664092 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:00.250717 master-0 kubenswrapper[7614]: I0224 05:29:00.250601 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerStarted","Data":"8ea9d13281e6d20cdeced5c381efed4b0919698bffbbef309d207e550b38c166"} Feb 24 05:29:00.251713 master-0 kubenswrapper[7614]: I0224 05:29:00.251172 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:29:00.253489 master-0 kubenswrapper[7614]: I0224 05:29:00.253442 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" event={"ID":"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2","Type":"ContainerStarted","Data":"842ea8e40e21d1e17280531b7bf3366a27fe38d70f8174b9bcacf28d6df95dc0"} Feb 24 05:29:00.664355 master-0 kubenswrapper[7614]: I0224 05:29:00.664099 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:00.664355 master-0 kubenswrapper[7614]: [-]has-synced failed: 
reason withheld Feb 24 05:29:00.664355 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:00.664355 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:00.664355 master-0 kubenswrapper[7614]: I0224 05:29:00.664211 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:01.663524 master-0 kubenswrapper[7614]: I0224 05:29:01.663419 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:01.663524 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:01.663524 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:01.663524 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:01.664691 master-0 kubenswrapper[7614]: I0224 05:29:01.663532 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:02.663820 master-0 kubenswrapper[7614]: I0224 05:29:02.663680 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:02.663820 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:02.663820 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:02.663820 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:02.663820 master-0 kubenswrapper[7614]: 
I0224 05:29:02.663799 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:03.288773 master-0 kubenswrapper[7614]: I0224 05:29:03.288677 7614 generic.go:334] "Generic (PLEG): container finished" podID="7a2c651d-ea1a-41f2-9745-04adc8d88904" containerID="5b9fbeb4c761c7177b525ed4d8c68cf8e069fca30c46bcfac1010c8ec65d4d07" exitCode=0 Feb 24 05:29:03.288773 master-0 kubenswrapper[7614]: I0224 05:29:03.288745 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" event={"ID":"7a2c651d-ea1a-41f2-9745-04adc8d88904","Type":"ContainerDied","Data":"5b9fbeb4c761c7177b525ed4d8c68cf8e069fca30c46bcfac1010c8ec65d4d07"} Feb 24 05:29:03.289294 master-0 kubenswrapper[7614]: I0224 05:29:03.288802 7614 scope.go:117] "RemoveContainer" containerID="1fe643ed33a9f72192d56893c5e0183a5530b52d1fd5cb43d00c8adaabb5837c" Feb 24 05:29:03.290079 master-0 kubenswrapper[7614]: I0224 05:29:03.289743 7614 scope.go:117] "RemoveContainer" containerID="5b9fbeb4c761c7177b525ed4d8c68cf8e069fca30c46bcfac1010c8ec65d4d07" Feb 24 05:29:03.507088 master-0 kubenswrapper[7614]: I0224 05:29:03.506978 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body= Feb 24 05:29:03.507457 master-0 kubenswrapper[7614]: I0224 05:29:03.507089 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" Feb 24 05:29:03.507457 master-0 kubenswrapper[7614]: I0224 05:29:03.507118 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body= Feb 24 05:29:03.507457 master-0 kubenswrapper[7614]: I0224 05:29:03.507244 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" Feb 24 05:29:03.663274 master-0 kubenswrapper[7614]: I0224 05:29:03.663214 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:03.663274 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:03.663274 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:03.663274 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:03.663588 master-0 kubenswrapper[7614]: I0224 05:29:03.663323 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:03.931563 master-0 kubenswrapper[7614]: E0224 05:29:03.931363 7614 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context 
deadline exceeded" pod="openshift-etcd/etcd-master-0" Feb 24 05:29:04.303396 master-0 kubenswrapper[7614]: I0224 05:29:04.303262 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" event={"ID":"7a2c651d-ea1a-41f2-9745-04adc8d88904","Type":"ContainerStarted","Data":"bebe1f967ad68db6af23fd9f462703e7dad0d313cdcd91fbfb4c9b90869adf49"} Feb 24 05:29:04.307559 master-0 kubenswrapper[7614]: I0224 05:29:04.307489 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-5d87bf58c-ncrqj_17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/kube-apiserver-operator/1.log" Feb 24 05:29:04.307708 master-0 kubenswrapper[7614]: I0224 05:29:04.307624 7614 generic.go:334] "Generic (PLEG): container finished" podID="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" containerID="4128e6ec737b6b0efca5e7827427326735a8755e3faf1df48d6f075e6755cd88" exitCode=0 Feb 24 05:29:04.307784 master-0 kubenswrapper[7614]: I0224 05:29:04.307743 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" event={"ID":"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d","Type":"ContainerDied","Data":"4128e6ec737b6b0efca5e7827427326735a8755e3faf1df48d6f075e6755cd88"} Feb 24 05:29:04.307859 master-0 kubenswrapper[7614]: I0224 05:29:04.307817 7614 scope.go:117] "RemoveContainer" containerID="3b73827e2bb1f8b20c02df6acec604b6c43e878ca9e2bd5192c12a2a62cbd894" Feb 24 05:29:04.308823 master-0 kubenswrapper[7614]: I0224 05:29:04.308776 7614 scope.go:117] "RemoveContainer" containerID="4128e6ec737b6b0efca5e7827427326735a8755e3faf1df48d6f075e6755cd88" Feb 24 05:29:04.312343 master-0 kubenswrapper[7614]: I0224 05:29:04.312236 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-fc889cfd5-r6p58_c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/kube-storage-version-migrator-operator/1.log" Feb 24 05:29:04.312464 master-0 kubenswrapper[7614]: I0224 05:29:04.312396 7614 generic.go:334] "Generic (PLEG): container finished" podID="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" containerID="8eadd02a3eb053b6fcdd393a3aeb7df438083855b4ae5ac3cfedf974ce5cb69c" exitCode=0 Feb 24 05:29:04.312535 master-0 kubenswrapper[7614]: I0224 05:29:04.312467 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" event={"ID":"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e","Type":"ContainerDied","Data":"8eadd02a3eb053b6fcdd393a3aeb7df438083855b4ae5ac3cfedf974ce5cb69c"} Feb 24 05:29:04.313458 master-0 kubenswrapper[7614]: I0224 05:29:04.313391 7614 scope.go:117] "RemoveContainer" containerID="8eadd02a3eb053b6fcdd393a3aeb7df438083855b4ae5ac3cfedf974ce5cb69c" Feb 24 05:29:04.370473 master-0 kubenswrapper[7614]: I0224 05:29:04.370394 7614 scope.go:117] "RemoveContainer" containerID="49b21c85c511839ea61bf1eb992b507dfd3ec3bd10df341c02909db55b0a753b" Feb 24 05:29:04.664192 master-0 kubenswrapper[7614]: I0224 05:29:04.664073 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:04.664192 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:04.664192 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:04.664192 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:04.664192 master-0 kubenswrapper[7614]: I0224 05:29:04.664174 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:05.344539 master-0 kubenswrapper[7614]: I0224 05:29:05.344404 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" event={"ID":"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d","Type":"ContainerStarted","Data":"dead2ce00b079adf81cc56cead1d7dbc7aa32c74452fee094d754f08356f419a"} Feb 24 05:29:05.347103 master-0 kubenswrapper[7614]: I0224 05:29:05.347053 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/4.log" Feb 24 05:29:05.347532 master-0 kubenswrapper[7614]: I0224 05:29:05.347488 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/3.log" Feb 24 05:29:05.347635 master-0 kubenswrapper[7614]: I0224 05:29:05.347540 7614 generic.go:334] "Generic (PLEG): container finished" podID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" exitCode=1 Feb 24 05:29:05.347635 master-0 kubenswrapper[7614]: I0224 05:29:05.347616 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerDied","Data":"1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a"} Feb 24 05:29:05.347969 master-0 kubenswrapper[7614]: I0224 05:29:05.347654 7614 scope.go:117] "RemoveContainer" containerID="cd2719bcd396d956c5ec0dbab7235948c54420d996fc5bd5c8732105713b4ef6" Feb 24 05:29:05.348126 master-0 kubenswrapper[7614]: I0224 05:29:05.348084 7614 scope.go:117] "RemoveContainer" 
containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" Feb 24 05:29:05.348374 master-0 kubenswrapper[7614]: E0224 05:29:05.348303 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:29:05.352247 master-0 kubenswrapper[7614]: I0224 05:29:05.352193 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" event={"ID":"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e","Type":"ContainerStarted","Data":"fe1287b1a87015507ac4c861f23d2a687a581416d4d043e3b3e2b99b58059fa4"} Feb 24 05:29:05.664398 master-0 kubenswrapper[7614]: I0224 05:29:05.664139 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:05.664398 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:05.664398 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:05.664398 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:05.664398 master-0 kubenswrapper[7614]: I0224 05:29:05.664273 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:06.174786 master-0 kubenswrapper[7614]: I0224 05:29:06.174706 7614 scope.go:117] 
"RemoveContainer" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d" Feb 24 05:29:06.175698 master-0 kubenswrapper[7614]: E0224 05:29:06.175657 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:29:06.362350 master-0 kubenswrapper[7614]: I0224 05:29:06.362282 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/4.log" Feb 24 05:29:06.506404 master-0 kubenswrapper[7614]: I0224 05:29:06.506276 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body= Feb 24 05:29:06.506822 master-0 kubenswrapper[7614]: I0224 05:29:06.506449 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" Feb 24 05:29:06.506993 master-0 kubenswrapper[7614]: I0224 05:29:06.506359 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 
10.128.0.57:8443: connect: connection refused" start-of-body= Feb 24 05:29:06.509402 master-0 kubenswrapper[7614]: I0224 05:29:06.509289 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" Feb 24 05:29:06.665015 master-0 kubenswrapper[7614]: I0224 05:29:06.664869 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:06.665015 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:06.665015 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:06.665015 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:06.665015 master-0 kubenswrapper[7614]: I0224 05:29:06.665013 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:07.379240 master-0 kubenswrapper[7614]: I0224 05:29:07.379152 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-7bcfbc574b-8zrj9_22813c83-2f60-44ad-9624-ad367cec08f7/kube-controller-manager-operator/1.log" Feb 24 05:29:07.380432 master-0 kubenswrapper[7614]: I0224 05:29:07.379253 7614 generic.go:334] "Generic (PLEG): container finished" podID="22813c83-2f60-44ad-9624-ad367cec08f7" containerID="c0559153cb9d3232da1d9baca34a653eff61d748f8d7e4af8a7f1e0e1d63e86d" exitCode=0 Feb 24 05:29:07.380432 master-0 
kubenswrapper[7614]: I0224 05:29:07.379378 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" event={"ID":"22813c83-2f60-44ad-9624-ad367cec08f7","Type":"ContainerDied","Data":"c0559153cb9d3232da1d9baca34a653eff61d748f8d7e4af8a7f1e0e1d63e86d"} Feb 24 05:29:07.380432 master-0 kubenswrapper[7614]: I0224 05:29:07.379519 7614 scope.go:117] "RemoveContainer" containerID="03dd9053750096b7f82252736f4fac427fd0dcd291c847a9672ee97680c7a2e7" Feb 24 05:29:07.380432 master-0 kubenswrapper[7614]: I0224 05:29:07.380343 7614 scope.go:117] "RemoveContainer" containerID="c0559153cb9d3232da1d9baca34a653eff61d748f8d7e4af8a7f1e0e1d63e86d" Feb 24 05:29:07.663235 master-0 kubenswrapper[7614]: I0224 05:29:07.663135 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:07.663235 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:07.663235 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:07.663235 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:07.663843 master-0 kubenswrapper[7614]: I0224 05:29:07.663248 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:08.399621 master-0 kubenswrapper[7614]: I0224 05:29:08.399482 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" event={"ID":"22813c83-2f60-44ad-9624-ad367cec08f7","Type":"ContainerStarted","Data":"dbcf666fa2144c38b8f4a57a6b23e5154028b4cf93dae639bfab2a5c6eefe2f1"} 
Feb 24 05:29:08.406917 master-0 kubenswrapper[7614]: I0224 05:29:08.402570 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/2.log"
Feb 24 05:29:08.406917 master-0 kubenswrapper[7614]: I0224 05:29:08.403548 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/1.log"
Feb 24 05:29:08.406917 master-0 kubenswrapper[7614]: I0224 05:29:08.404297 7614 generic.go:334] "Generic (PLEG): container finished" podID="39623346-691b-42c8-af76-409d4f6629af" containerID="86e637d0b5dc95d562f8425432d6a525c0e0e358c1d51fc8a2c0d80b43fd747a" exitCode=1
Feb 24 05:29:08.406917 master-0 kubenswrapper[7614]: I0224 05:29:08.404386 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerDied","Data":"86e637d0b5dc95d562f8425432d6a525c0e0e358c1d51fc8a2c0d80b43fd747a"}
Feb 24 05:29:08.406917 master-0 kubenswrapper[7614]: I0224 05:29:08.404459 7614 scope.go:117] "RemoveContainer" containerID="f390c9172bf667ce9e5a44fc191de51013e82e96eafb2547c98f9fa6aad29054"
Feb 24 05:29:08.406917 master-0 kubenswrapper[7614]: I0224 05:29:08.405287 7614 scope.go:117] "RemoveContainer" containerID="86e637d0b5dc95d562f8425432d6a525c0e0e358c1d51fc8a2c0d80b43fd747a"
Feb 24 05:29:08.406917 master-0 kubenswrapper[7614]: E0224 05:29:08.405781 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-54hnv_openshift-machine-api(39623346-691b-42c8-af76-409d4f6629af)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" podUID="39623346-691b-42c8-af76-409d4f6629af"
Feb 24 05:29:08.664098 master-0 kubenswrapper[7614]: I0224 05:29:08.663854 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:08.664098 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:08.664098 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:08.664098 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:08.664098 master-0 kubenswrapper[7614]: I0224 05:29:08.663998 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:09.175364 master-0 kubenswrapper[7614]: I0224 05:29:09.175215 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"
Feb 24 05:29:09.416356 master-0 kubenswrapper[7614]: I0224 05:29:09.416238 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/2.log"
Feb 24 05:29:09.506584 master-0 kubenswrapper[7614]: I0224 05:29:09.506458 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:09.506584 master-0 kubenswrapper[7614]: I0224 05:29:09.506474 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:09.506584 master-0 kubenswrapper[7614]: I0224 05:29:09.506553 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:09.506983 master-0 kubenswrapper[7614]: I0224 05:29:09.506632 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:09.506983 master-0 kubenswrapper[7614]: I0224 05:29:09.506711 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"
Feb 24 05:29:09.507662 master-0 kubenswrapper[7614]: I0224 05:29:09.507602 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"8ea9d13281e6d20cdeced5c381efed4b0919698bffbbef309d207e550b38c166"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Feb 24 05:29:09.507839 master-0 kubenswrapper[7614]: I0224 05:29:09.507663 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" containerID="cri-o://8ea9d13281e6d20cdeced5c381efed4b0919698bffbbef309d207e550b38c166" gracePeriod=30
Feb 24 05:29:09.508174 master-0 kubenswrapper[7614]: I0224 05:29:09.508129 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:09.508597 master-0 kubenswrapper[7614]: I0224 05:29:09.508551 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:09.527900 master-0 kubenswrapper[7614]: E0224 05:29:09.527808 7614 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s"
Feb 24 05:29:09.663957 master-0 kubenswrapper[7614]: I0224 05:29:09.663809 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:09.663957 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:09.663957 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:09.663957 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:09.664914 master-0 kubenswrapper[7614]: I0224 05:29:09.663968 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:10.433830 master-0 kubenswrapper[7614]: I0224 05:29:10.433759 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-7b87v_3f511d03-a182-4968-ba40-5c5c10e5e6be/openshift-config-operator/1.log"
Feb 24 05:29:10.435303 master-0 kubenswrapper[7614]: I0224 05:29:10.435226 7614 generic.go:334] "Generic (PLEG): container finished" podID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerID="8ea9d13281e6d20cdeced5c381efed4b0919698bffbbef309d207e550b38c166" exitCode=255
Feb 24 05:29:10.435480 master-0 kubenswrapper[7614]: I0224 05:29:10.435284 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerDied","Data":"8ea9d13281e6d20cdeced5c381efed4b0919698bffbbef309d207e550b38c166"}
Feb 24 05:29:10.435600 master-0 kubenswrapper[7614]: I0224 05:29:10.435480 7614 scope.go:117] "RemoveContainer" containerID="8c92ed541ae527386db4b6a76cf26d9c5a64e4216b4963a7e69a420ee8324c44"
Feb 24 05:29:10.439188 master-0 kubenswrapper[7614]: I0224 05:29:10.439119 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/3.log"
Feb 24 05:29:10.440608 master-0 kubenswrapper[7614]: I0224 05:29:10.440546 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6"}
Feb 24 05:29:10.663843 master-0 kubenswrapper[7614]: I0224 05:29:10.663708 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:10.663843 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:10.663843 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:10.663843 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:10.663843 master-0 kubenswrapper[7614]: I0224 05:29:10.663839 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:10.769251 master-0 kubenswrapper[7614]: I0224 05:29:10.769127 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:29:10.769251 master-0 kubenswrapper[7614]: I0224 05:29:10.769240 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:29:11.453366 master-0 kubenswrapper[7614]: I0224 05:29:11.453253 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-7b87v_3f511d03-a182-4968-ba40-5c5c10e5e6be/openshift-config-operator/1.log"
Feb 24 05:29:11.454457 master-0 kubenswrapper[7614]: I0224 05:29:11.454413 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerStarted","Data":"dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe"}
Feb 24 05:29:11.455622 master-0 kubenswrapper[7614]: I0224 05:29:11.455530 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:11.455755 master-0 kubenswrapper[7614]: I0224 05:29:11.455683 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:11.663507 master-0 kubenswrapper[7614]: I0224 05:29:11.663384 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:11.663507 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:11.663507 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:11.663507 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:11.663507 master-0 kubenswrapper[7614]: I0224 05:29:11.663502 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:12.463490 master-0 kubenswrapper[7614]: I0224 05:29:12.463414 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"
Feb 24 05:29:12.664500 master-0 kubenswrapper[7614]: I0224 05:29:12.664406 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:12.664500 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:12.664500 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:12.664500 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:12.665187 master-0 kubenswrapper[7614]: I0224 05:29:12.665130 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:13.474529 master-0 kubenswrapper[7614]: I0224 05:29:13.474387 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:13.475499 master-0 kubenswrapper[7614]: I0224 05:29:13.474520 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:13.664634 master-0 kubenswrapper[7614]: I0224 05:29:13.664494 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:13.664634 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:13.664634 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:13.664634 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:13.664634 master-0 kubenswrapper[7614]: I0224 05:29:13.664642 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:13.770097 master-0 kubenswrapper[7614]: I0224 05:29:13.769952 7614 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 05:29:13.770553 master-0 kubenswrapper[7614]: I0224 05:29:13.770104 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 24 05:29:14.175474 master-0 kubenswrapper[7614]: I0224 05:29:14.175200 7614 scope.go:117] "RemoveContainer" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9"
Feb 24 05:29:14.175819 master-0 kubenswrapper[7614]: E0224 05:29:14.175646 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da)\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da"
Feb 24 05:29:14.664354 master-0 kubenswrapper[7614]: I0224 05:29:14.664179 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:14.664354 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:14.664354 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:14.664354 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:14.665572 master-0 kubenswrapper[7614]: I0224 05:29:14.664405 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:15.506793 master-0 kubenswrapper[7614]: I0224 05:29:15.506712 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:15.507437 master-0 kubenswrapper[7614]: I0224 05:29:15.506815 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:15.507437 master-0 kubenswrapper[7614]: I0224 05:29:15.506712 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:15.507437 master-0 kubenswrapper[7614]: I0224 05:29:15.506886 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:15.662880 master-0 kubenswrapper[7614]: I0224 05:29:15.662753 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:15.662880 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:15.662880 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:15.662880 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:15.663514 master-0 kubenswrapper[7614]: I0224 05:29:15.662875 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:16.664607 master-0 kubenswrapper[7614]: I0224 05:29:16.664491 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:16.664607 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:16.664607 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:16.664607 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:16.666388 master-0 kubenswrapper[7614]: I0224 05:29:16.664620 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:17.174445 master-0 kubenswrapper[7614]: I0224 05:29:17.174283 7614 scope.go:117] "RemoveContainer" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a"
Feb 24 05:29:17.174864 master-0 kubenswrapper[7614]: E0224 05:29:17.174738 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba"
Feb 24 05:29:17.177471 master-0 kubenswrapper[7614]: I0224 05:29:17.177203 7614 scope.go:117] "RemoveContainer" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d"
Feb 24 05:29:17.180610 master-0 kubenswrapper[7614]: E0224 05:29:17.180544 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb"
Feb 24 05:29:17.663977 master-0 kubenswrapper[7614]: I0224 05:29:17.663846 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:17.663977 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:17.663977 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:17.663977 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:17.663977 master-0 kubenswrapper[7614]: I0224 05:29:17.663970 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:18.506589 master-0 kubenswrapper[7614]: I0224 05:29:18.506463 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:18.506931 master-0 kubenswrapper[7614]: I0224 05:29:18.506589 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:18.506931 master-0 kubenswrapper[7614]: I0224 05:29:18.506464 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:18.506931 master-0 kubenswrapper[7614]: I0224 05:29:18.506766 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:18.663975 master-0 kubenswrapper[7614]: I0224 05:29:18.663862 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:18.663975 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:18.663975 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:18.663975 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:18.664659 master-0 kubenswrapper[7614]: I0224 05:29:18.663994 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:19.532131 master-0 kubenswrapper[7614]: I0224 05:29:19.532045 7614 generic.go:334] "Generic (PLEG): container finished" podID="633d33a1-e1b1-40b0-b56a-afb0c1085d97" containerID="f0a59447aa5599eed278c625c9ff436eeea9214419570f5ba689ba155470685a" exitCode=0
Feb 24 05:29:19.532479 master-0 kubenswrapper[7614]: I0224 05:29:19.532148 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" event={"ID":"633d33a1-e1b1-40b0-b56a-afb0c1085d97","Type":"ContainerDied","Data":"f0a59447aa5599eed278c625c9ff436eeea9214419570f5ba689ba155470685a"}
Feb 24 05:29:19.533858 master-0 kubenswrapper[7614]: I0224 05:29:19.533812 7614 scope.go:117] "RemoveContainer" containerID="f0a59447aa5599eed278c625c9ff436eeea9214419570f5ba689ba155470685a"
Feb 24 05:29:19.534356 master-0 kubenswrapper[7614]: I0224 05:29:19.534294 7614 generic.go:334] "Generic (PLEG): container finished" podID="58ecd829-4749-4c8a-933b-16b4acccac90" containerID="19a4a70cd708813c9cf34e54dd49971eba939aacdcaa013905918a3ca917b13e" exitCode=0
Feb 24 05:29:19.534422 master-0 kubenswrapper[7614]: I0224 05:29:19.534394 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" event={"ID":"58ecd829-4749-4c8a-933b-16b4acccac90","Type":"ContainerDied","Data":"19a4a70cd708813c9cf34e54dd49971eba939aacdcaa013905918a3ca917b13e"}
Feb 24 05:29:19.535507 master-0 kubenswrapper[7614]: I0224 05:29:19.535076 7614 scope.go:117] "RemoveContainer" containerID="19a4a70cd708813c9cf34e54dd49971eba939aacdcaa013905918a3ca917b13e"
Feb 24 05:29:19.546826 master-0 kubenswrapper[7614]: I0224 05:29:19.546710 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" event={"ID":"933beda1-c930-4831-a886-3cc6b7a992ad","Type":"ContainerDied","Data":"2c56b69fc4337064fa388eb97509499abfd2df910bf7a2fa34bbdc4682b29843"}
Feb 24 05:29:19.546906 master-0 kubenswrapper[7614]: I0224 05:29:19.546748 7614 generic.go:334] "Generic (PLEG): container finished" podID="933beda1-c930-4831-a886-3cc6b7a992ad" containerID="2c56b69fc4337064fa388eb97509499abfd2df910bf7a2fa34bbdc4682b29843" exitCode=0
Feb 24 05:29:19.548014 master-0 kubenswrapper[7614]: I0224 05:29:19.547823 7614 scope.go:117] "RemoveContainer" containerID="2c56b69fc4337064fa388eb97509499abfd2df910bf7a2fa34bbdc4682b29843"
Feb 24 05:29:19.549274 master-0 kubenswrapper[7614]: I0224 05:29:19.549237 7614 generic.go:334] "Generic (PLEG): container finished" podID="ab5afff8-1081-4acc-8ab9-d6bfd8df1d67" containerID="6c52c639645d2cd2c7e662742a4602420e9f03d769221f35786d315c1351ca22" exitCode=0
Feb 24 05:29:19.549621 master-0 kubenswrapper[7614]: I0224 05:29:19.549293 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" event={"ID":"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67","Type":"ContainerDied","Data":"6c52c639645d2cd2c7e662742a4602420e9f03d769221f35786d315c1351ca22"}
Feb 24 05:29:19.550447 master-0 kubenswrapper[7614]: I0224 05:29:19.550409 7614 scope.go:117] "RemoveContainer" containerID="6c52c639645d2cd2c7e662742a4602420e9f03d769221f35786d315c1351ca22"
Feb 24 05:29:19.551784 master-0 kubenswrapper[7614]: I0224 05:29:19.551750 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-h99t4_6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/cluster-node-tuning-operator/0.log"
Feb 24 05:29:19.551836 master-0 kubenswrapper[7614]: I0224 05:29:19.551796 7614 generic.go:334] "Generic (PLEG): container finished" podID="6e5ede6a-9d4b-47a2-b4ba-e6018910d05a" containerID="8e61e1d5a62185ea40dd7889454ccd250bbeb0122433d8e3015d94ba9f1d1334" exitCode=1
Feb 24 05:29:19.551912 master-0 kubenswrapper[7614]: I0224 05:29:19.551861 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" event={"ID":"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a","Type":"ContainerDied","Data":"8e61e1d5a62185ea40dd7889454ccd250bbeb0122433d8e3015d94ba9f1d1334"}
Feb 24 05:29:19.552661 master-0 kubenswrapper[7614]: I0224 05:29:19.552613 7614 scope.go:117] "RemoveContainer" containerID="8e61e1d5a62185ea40dd7889454ccd250bbeb0122433d8e3015d94ba9f1d1334"
Feb 24 05:29:19.554973 master-0 kubenswrapper[7614]: I0224 05:29:19.554919 7614 generic.go:334] "Generic (PLEG): container finished" podID="feee7fe8-e805-4807-b4c0-ecc7ef0f88d9" containerID="e0310f65eb21da7836bef1892997027dc547f133c634a87f14b119b040f60bd1" exitCode=0
Feb 24 05:29:19.555074 master-0 kubenswrapper[7614]: I0224 05:29:19.555012 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" event={"ID":"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9","Type":"ContainerDied","Data":"e0310f65eb21da7836bef1892997027dc547f133c634a87f14b119b040f60bd1"}
Feb 24 05:29:19.560616 master-0 kubenswrapper[7614]: I0224 05:29:19.560537 7614 generic.go:334] "Generic (PLEG): container finished" podID="e6f05507-d5c1-4102-a220-1db715a496e3" containerID="e2064230fd04624f769c4f745b80aa38ea29b6c2deabd8a0fd7e19128af8486a" exitCode=0
Feb 24 05:29:19.560774 master-0 kubenswrapper[7614]: I0224 05:29:19.560716 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" event={"ID":"e6f05507-d5c1-4102-a220-1db715a496e3","Type":"ContainerDied","Data":"e2064230fd04624f769c4f745b80aa38ea29b6c2deabd8a0fd7e19128af8486a"}
Feb 24 05:29:19.561004 master-0 kubenswrapper[7614]: I0224 05:29:19.560850 7614 scope.go:117] "RemoveContainer" containerID="acdec98fa977010c1aa977c4f0cce838f4bc4ae8e6cd6029b1446085a34e0532"
Feb 24 05:29:19.561004 master-0 kubenswrapper[7614]: I0224 05:29:19.560968 7614 scope.go:117] "RemoveContainer" containerID="e0310f65eb21da7836bef1892997027dc547f133c634a87f14b119b040f60bd1"
Feb 24 05:29:19.561575 master-0 kubenswrapper[7614]: I0224 05:29:19.561543 7614 scope.go:117] "RemoveContainer" containerID="e2064230fd04624f769c4f745b80aa38ea29b6c2deabd8a0fd7e19128af8486a"
Feb 24 05:29:19.662809 master-0 kubenswrapper[7614]: I0224 05:29:19.662717 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:19.662809 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:19.662809 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:19.662809 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:19.663149 master-0 kubenswrapper[7614]: I0224 05:29:19.662836 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:20.572077 master-0 kubenswrapper[7614]: I0224 05:29:20.571953 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" event={"ID":"933beda1-c930-4831-a886-3cc6b7a992ad","Type":"ContainerStarted","Data":"8aa349d597bf760450a366f96ca7234e3be47d009a25539c34dc8f659f32233a"}
Feb 24 05:29:20.575472 master-0 kubenswrapper[7614]: I0224 05:29:20.574610 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" event={"ID":"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67","Type":"ContainerStarted","Data":"1a1056973c5d5473d209548f2222681ed78b53c6a1d0f16c47730474eecf263b"}
Feb 24 05:29:20.577737 master-0 kubenswrapper[7614]: I0224 05:29:20.577682 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-h99t4_6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/cluster-node-tuning-operator/0.log"
Feb 24 05:29:20.577933 master-0 kubenswrapper[7614]: I0224 05:29:20.577786 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" event={"ID":"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a","Type":"ContainerStarted","Data":"7218c71bfaf9900bb82fd7b2ba79a6944bf8dbcf396fbc5dff49455b8a451b8c"}
Feb 24 05:29:20.580026 master-0 kubenswrapper[7614]: I0224 05:29:20.579973 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" event={"ID":"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9","Type":"ContainerStarted","Data":"dc9459ecf254063f382f42c31f0409bf79b3482ff431cc477a5ec1517a052a93"}
Feb 24 05:29:20.582510 master-0 kubenswrapper[7614]: I0224 05:29:20.582432 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" event={"ID":"e6f05507-d5c1-4102-a220-1db715a496e3","Type":"ContainerStarted","Data":"309486d87e5bf6c3d9e46b825241505b294a1369fa6f7f02b745dc6f886f7414"}
Feb 24 05:29:20.585782 master-0 kubenswrapper[7614]: I0224 05:29:20.585732 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" event={"ID":"633d33a1-e1b1-40b0-b56a-afb0c1085d97","Type":"ContainerStarted","Data":"5c18d4db83da9ac249e0e392fd241f4a27edc0ea37df736f71ca2e00ec959acd"}
Feb 24 05:29:20.588008 master-0 kubenswrapper[7614]: I0224 05:29:20.587972 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" event={"ID":"58ecd829-4749-4c8a-933b-16b4acccac90","Type":"ContainerStarted","Data":"5ebc309c9f528f49cdd47c193b075c87f914c97869c55ff088b66b1afe76e021"}
Feb 24 05:29:20.663048 master-0 kubenswrapper[7614]: I0224 05:29:20.662977 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:29:20.663048 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:29:20.663048 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:29:20.663048 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:29:20.663048 master-0 kubenswrapper[7614]: I0224 05:29:20.663030 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:29:21.174282 master-0 kubenswrapper[7614]: I0224 05:29:21.174177 7614 scope.go:117] "RemoveContainer" containerID="86e637d0b5dc95d562f8425432d6a525c0e0e358c1d51fc8a2c0d80b43fd747a"
Feb 24 05:29:21.174868 master-0 kubenswrapper[7614]: E0224 05:29:21.174614 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-d6bb9bb76-54hnv_openshift-machine-api(39623346-691b-42c8-af76-409d4f6629af)\"" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" podUID="39623346-691b-42c8-af76-409d4f6629af"
Feb 24 05:29:21.407876 master-0 kubenswrapper[7614]: I0224 05:29:21.406761 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:29:21.418643 master-0 kubenswrapper[7614]: I0224 05:29:21.418564 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:29:21.506903 master-0 kubenswrapper[7614]: I0224 05:29:21.506818 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:21.507143 master-0 kubenswrapper[7614]: I0224 05:29:21.506933 7614 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:21.507143 master-0 kubenswrapper[7614]: I0224 05:29:21.507023 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"
Feb 24 05:29:21.507239 master-0 kubenswrapper[7614]: I0224 05:29:21.507179 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:21.507354 master-0 kubenswrapper[7614]: I0224 05:29:21.507267 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused"
Feb 24 05:29:21.507876 master-0 kubenswrapper[7614]: I0224 05:29:21.507830 7614 patch_prober.go:28] interesting pod/openshift-config-operator-6f47d587d6-7b87v container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" start-of-body=
Feb 24 05:29:21.507975 master-0 kubenswrapper[7614]: I0224 05:29:21.507884 7614 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" probeResult="failure" output="Get
\"https://10.128.0.57:8443/healthz\": dial tcp 10.128.0.57:8443: connect: connection refused" Feb 24 05:29:21.508384 master-0 kubenswrapper[7614]: I0224 05:29:21.508291 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe"} pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Feb 24 05:29:21.508492 master-0 kubenswrapper[7614]: I0224 05:29:21.508408 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerName="openshift-config-operator" containerID="cri-o://dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe" gracePeriod=30 Feb 24 05:29:21.641337 master-0 kubenswrapper[7614]: I0224 05:29:21.639695 7614 generic.go:334] "Generic (PLEG): container finished" podID="39c4d0aa-c372-4d02-9302-337e68b56784" containerID="986b482003ff19c4b718ec972373fc705ec17bcf47510b88393859e89ab2007d" exitCode=0 Feb 24 05:29:21.641337 master-0 kubenswrapper[7614]: I0224 05:29:21.639803 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" event={"ID":"39c4d0aa-c372-4d02-9302-337e68b56784","Type":"ContainerDied","Data":"986b482003ff19c4b718ec972373fc705ec17bcf47510b88393859e89ab2007d"} Feb 24 05:29:21.641337 master-0 kubenswrapper[7614]: I0224 05:29:21.640328 7614 scope.go:117] "RemoveContainer" containerID="986b482003ff19c4b718ec972373fc705ec17bcf47510b88393859e89ab2007d" Feb 24 05:29:21.648329 master-0 kubenswrapper[7614]: I0224 05:29:21.643238 7614 generic.go:334] "Generic (PLEG): container finished" podID="e1f03d97-1a6a-41e4-9ed3-cd9b01c46400" 
containerID="a36fb847cfc8df5fc6c5185376329dd9ae5ab47df139ba0d792b1adb2ce6277f" exitCode=0 Feb 24 05:29:21.648329 master-0 kubenswrapper[7614]: I0224 05:29:21.643395 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" event={"ID":"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400","Type":"ContainerDied","Data":"a36fb847cfc8df5fc6c5185376329dd9ae5ab47df139ba0d792b1adb2ce6277f"} Feb 24 05:29:21.648329 master-0 kubenswrapper[7614]: I0224 05:29:21.644245 7614 scope.go:117] "RemoveContainer" containerID="a36fb847cfc8df5fc6c5185376329dd9ae5ab47df139ba0d792b1adb2ce6277f" Feb 24 05:29:21.663339 master-0 kubenswrapper[7614]: I0224 05:29:21.656862 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7d7db75979-4fk6k_f77227c8-c52d-4a71-ae1b-792055f6f23d/network-operator/1.log" Feb 24 05:29:21.663339 master-0 kubenswrapper[7614]: I0224 05:29:21.656917 7614 generic.go:334] "Generic (PLEG): container finished" podID="f77227c8-c52d-4a71-ae1b-792055f6f23d" containerID="77344984c3a22910313574fd5443c3f8c0826a85a9d2f12dd8592b5e925a1b84" exitCode=0 Feb 24 05:29:21.663339 master-0 kubenswrapper[7614]: I0224 05:29:21.656984 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" event={"ID":"f77227c8-c52d-4a71-ae1b-792055f6f23d","Type":"ContainerDied","Data":"77344984c3a22910313574fd5443c3f8c0826a85a9d2f12dd8592b5e925a1b84"} Feb 24 05:29:21.663339 master-0 kubenswrapper[7614]: I0224 05:29:21.657026 7614 scope.go:117] "RemoveContainer" containerID="6e3c93a1a355eeeb3f5cb2283a174709bfd59dc7e2e2f1d724c2278f1e630da9" Feb 24 05:29:21.663339 master-0 kubenswrapper[7614]: I0224 05:29:21.657427 7614 scope.go:117] "RemoveContainer" containerID="77344984c3a22910313574fd5443c3f8c0826a85a9d2f12dd8592b5e925a1b84" Feb 24 05:29:21.674326 master-0 kubenswrapper[7614]: I0224 05:29:21.671199 7614 patch_prober.go:28] 
interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:21.674326 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:21.674326 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:21.674326 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:21.674326 master-0 kubenswrapper[7614]: I0224 05:29:21.671281 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:21.723399 master-0 kubenswrapper[7614]: I0224 05:29:21.719699 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-9d82f_49bfccec-61ec-4bef-a561-9f6e6f906215/package-server-manager/0.log" Feb 24 05:29:21.723399 master-0 kubenswrapper[7614]: I0224 05:29:21.720811 7614 generic.go:334] "Generic (PLEG): container finished" podID="49bfccec-61ec-4bef-a561-9f6e6f906215" containerID="44c8e9a1ff88f591315795d60d58a57e8877a5eadcf63c1d03aab3f292d278d7" exitCode=1 Feb 24 05:29:21.723399 master-0 kubenswrapper[7614]: I0224 05:29:21.720905 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" event={"ID":"49bfccec-61ec-4bef-a561-9f6e6f906215","Type":"ContainerDied","Data":"44c8e9a1ff88f591315795d60d58a57e8877a5eadcf63c1d03aab3f292d278d7"} Feb 24 05:29:21.723399 master-0 kubenswrapper[7614]: I0224 05:29:21.721624 7614 scope.go:117] "RemoveContainer" containerID="44c8e9a1ff88f591315795d60d58a57e8877a5eadcf63c1d03aab3f292d278d7" Feb 24 05:29:21.731328 master-0 kubenswrapper[7614]: I0224 05:29:21.730777 7614 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-65mc5_116e6b47-d435-49ca-abb5-088788daf16a/machine-api-operator/0.log" Feb 24 05:29:21.731328 master-0 kubenswrapper[7614]: I0224 05:29:21.731064 7614 generic.go:334] "Generic (PLEG): container finished" podID="116e6b47-d435-49ca-abb5-088788daf16a" containerID="6b3c3ebf05dd2e018df6f39f4bdd076d24f312bc4472c6ee016795dfeeb9269e" exitCode=255 Feb 24 05:29:21.731328 master-0 kubenswrapper[7614]: I0224 05:29:21.731123 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" event={"ID":"116e6b47-d435-49ca-abb5-088788daf16a","Type":"ContainerDied","Data":"6b3c3ebf05dd2e018df6f39f4bdd076d24f312bc4472c6ee016795dfeeb9269e"} Feb 24 05:29:21.741326 master-0 kubenswrapper[7614]: I0224 05:29:21.732694 7614 scope.go:117] "RemoveContainer" containerID="6b3c3ebf05dd2e018df6f39f4bdd076d24f312bc4472c6ee016795dfeeb9269e" Feb 24 05:29:21.741326 master-0 kubenswrapper[7614]: I0224 05:29:21.732736 7614 generic.go:334] "Generic (PLEG): container finished" podID="23bdafdd-27c9-4461-be4a-3ea916ac3875" containerID="e316013fb83fe451b12a337302e18c3ea427b3968c1f30f37e4c5892013d663c" exitCode=0 Feb 24 05:29:21.741326 master-0 kubenswrapper[7614]: I0224 05:29:21.733460 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" event={"ID":"23bdafdd-27c9-4461-be4a-3ea916ac3875","Type":"ContainerDied","Data":"e316013fb83fe451b12a337302e18c3ea427b3968c1f30f37e4c5892013d663c"} Feb 24 05:29:21.741326 master-0 kubenswrapper[7614]: I0224 05:29:21.734213 7614 scope.go:117] "RemoveContainer" containerID="e316013fb83fe451b12a337302e18c3ea427b3968c1f30f37e4c5892013d663c" Feb 24 05:29:21.796332 master-0 kubenswrapper[7614]: I0224 05:29:21.785539 7614 generic.go:334] "Generic (PLEG): container finished" podID="b426cb33-1624-45e6-b8c5-4e8d251f6339" 
containerID="adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff" exitCode=0 Feb 24 05:29:21.796332 master-0 kubenswrapper[7614]: I0224 05:29:21.785664 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" event={"ID":"b426cb33-1624-45e6-b8c5-4e8d251f6339","Type":"ContainerDied","Data":"adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff"} Feb 24 05:29:21.796332 master-0 kubenswrapper[7614]: I0224 05:29:21.786269 7614 scope.go:117] "RemoveContainer" containerID="adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff" Feb 24 05:29:21.855332 master-0 kubenswrapper[7614]: I0224 05:29:21.852057 7614 generic.go:334] "Generic (PLEG): container finished" podID="d86d5bbe-3768-4695-810b-245a56e4fd1d" containerID="2f151e3442498eed531dc228511816d55db9ae5db685cbb2166ce65b5b71997d" exitCode=0 Feb 24 05:29:21.855332 master-0 kubenswrapper[7614]: I0224 05:29:21.852179 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" event={"ID":"d86d5bbe-3768-4695-810b-245a56e4fd1d","Type":"ContainerDied","Data":"2f151e3442498eed531dc228511816d55db9ae5db685cbb2166ce65b5b71997d"} Feb 24 05:29:21.855332 master-0 kubenswrapper[7614]: I0224 05:29:21.852920 7614 scope.go:117] "RemoveContainer" containerID="2f151e3442498eed531dc228511816d55db9ae5db685cbb2166ce65b5b71997d" Feb 24 05:29:21.870549 master-0 kubenswrapper[7614]: I0224 05:29:21.870433 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-mcf2z_5d51ce58-55f6-45d5-9d5d-7b31ae42380a/cluster-autoscaler-operator/0.log" Feb 24 05:29:21.893785 master-0 kubenswrapper[7614]: I0224 05:29:21.882629 7614 generic.go:334] "Generic (PLEG): container finished" podID="5d51ce58-55f6-45d5-9d5d-7b31ae42380a" containerID="bb3a0e8898f8ea9060490a27cc51b9a9e7a34486fe6313b2342ac6b15f983128" exitCode=255 
Feb 24 05:29:21.893785 master-0 kubenswrapper[7614]: I0224 05:29:21.883707 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" event={"ID":"5d51ce58-55f6-45d5-9d5d-7b31ae42380a","Type":"ContainerDied","Data":"bb3a0e8898f8ea9060490a27cc51b9a9e7a34486fe6313b2342ac6b15f983128"} Feb 24 05:29:21.893785 master-0 kubenswrapper[7614]: I0224 05:29:21.884045 7614 scope.go:117] "RemoveContainer" containerID="bb3a0e8898f8ea9060490a27cc51b9a9e7a34486fe6313b2342ac6b15f983128" Feb 24 05:29:21.902343 master-0 kubenswrapper[7614]: E0224 05:29:21.894347 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-7b87v_openshift-config-operator(3f511d03-a182-4968-ba40-5c5c10e5e6be)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" Feb 24 05:29:21.931848 master-0 kubenswrapper[7614]: I0224 05:29:21.927582 7614 scope.go:117] "RemoveContainer" containerID="104b76f7ac0ef4084c50822d35c6690afc0cd965133c5d489594ae901dd1b9f2" Feb 24 05:29:21.985719 master-0 kubenswrapper[7614]: I0224 05:29:21.985232 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-j28p2"] Feb 24 05:29:21.985719 master-0 kubenswrapper[7614]: I0224 05:29:21.985329 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-j28p2"] Feb 24 05:29:22.221120 master-0 kubenswrapper[7614]: I0224 05:29:22.218840 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"] Feb 24 05:29:22.221120 master-0 kubenswrapper[7614]: I0224 05:29:22.219099 7614 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerName="multus-admission-controller" containerID="cri-o://eb40f700665ddc5a59ad171b706d2fdf1426e6e5d152e9cd1782903011fd60d0" gracePeriod=30 Feb 24 05:29:22.221120 master-0 kubenswrapper[7614]: I0224 05:29:22.219247 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerName="kube-rbac-proxy" containerID="cri-o://bbd5aa582f8241ea4c62c11beba1abad300d328a3af1603fa3f170227b163e28" gracePeriod=30 Feb 24 05:29:22.237150 master-0 kubenswrapper[7614]: I0224 05:29:22.237097 7614 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:29:22.237252 master-0 kubenswrapper[7614]: I0224 05:29:22.237212 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:29:22.664144 master-0 kubenswrapper[7614]: I0224 05:29:22.664078 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:22.664144 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:22.664144 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:22.664144 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:22.664888 master-0 kubenswrapper[7614]: I0224 05:29:22.664148 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 24 05:29:22.896094 master-0 kubenswrapper[7614]: I0224 05:29:22.896014 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-65mc5_116e6b47-d435-49ca-abb5-088788daf16a/machine-api-operator/0.log" Feb 24 05:29:22.896979 master-0 kubenswrapper[7614]: I0224 05:29:22.896721 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" event={"ID":"116e6b47-d435-49ca-abb5-088788daf16a","Type":"ContainerStarted","Data":"f13437b7c066da83c3ef3cdc2a397491362db231ac58fc8d6236e57e89d48a7c"} Feb 24 05:29:22.900189 master-0 kubenswrapper[7614]: I0224 05:29:22.900164 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" event={"ID":"39c4d0aa-c372-4d02-9302-337e68b56784","Type":"ContainerStarted","Data":"730a8868d3beee1872cc5db52659fe5a3318920390993dcaedf0138db12ca3c2"} Feb 24 05:29:22.902767 master-0 kubenswrapper[7614]: I0224 05:29:22.902744 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" event={"ID":"f77227c8-c52d-4a71-ae1b-792055f6f23d","Type":"ContainerStarted","Data":"8b0e2de44700c246f20a78645413835caafd841dc6349acfa7b33a12ef643edb"} Feb 24 05:29:22.905088 master-0 kubenswrapper[7614]: I0224 05:29:22.905019 7614 generic.go:334] "Generic (PLEG): container finished" podID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerID="bbd5aa582f8241ea4c62c11beba1abad300d328a3af1603fa3f170227b163e28" exitCode=0 Feb 24 05:29:22.905172 master-0 kubenswrapper[7614]: I0224 05:29:22.905086 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" event={"ID":"8be1f8db-3f0b-4d6f-be42-7564fba66820","Type":"ContainerDied","Data":"bbd5aa582f8241ea4c62c11beba1abad300d328a3af1603fa3f170227b163e28"} Feb 24 05:29:22.907686 master-0 
kubenswrapper[7614]: I0224 05:29:22.907472 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" event={"ID":"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400","Type":"ContainerStarted","Data":"4e9ea0c1b8cf5336c013f2ac92bd4866e16e85a1e800b41135a94419b84cf316"} Feb 24 05:29:22.910356 master-0 kubenswrapper[7614]: I0224 05:29:22.910107 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" event={"ID":"d86d5bbe-3768-4695-810b-245a56e4fd1d","Type":"ContainerStarted","Data":"beb40dd037a06d74101b4373244bdd4912189e92990d1f1c96d326d55437328c"} Feb 24 05:29:22.913207 master-0 kubenswrapper[7614]: I0224 05:29:22.913178 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-9d82f_49bfccec-61ec-4bef-a561-9f6e6f906215/package-server-manager/0.log" Feb 24 05:29:22.914918 master-0 kubenswrapper[7614]: I0224 05:29:22.914706 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" event={"ID":"49bfccec-61ec-4bef-a561-9f6e6f906215","Type":"ContainerStarted","Data":"fd45e20baf6826161e3f8d563f7848432903e79f7d403f8df59edf2b1c375183"} Feb 24 05:29:22.915277 master-0 kubenswrapper[7614]: I0224 05:29:22.915242 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:29:22.925962 master-0 kubenswrapper[7614]: I0224 05:29:22.925895 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-mcf2z_5d51ce58-55f6-45d5-9d5d-7b31ae42380a/cluster-autoscaler-operator/0.log" Feb 24 05:29:22.926526 master-0 kubenswrapper[7614]: I0224 05:29:22.926491 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" event={"ID":"5d51ce58-55f6-45d5-9d5d-7b31ae42380a","Type":"ContainerStarted","Data":"94ba0651d2d417f17fbc5af34cc83e17a6cf86b902070160637fa40a80ddda81"} Feb 24 05:29:22.929838 master-0 kubenswrapper[7614]: I0224 05:29:22.929810 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" event={"ID":"23bdafdd-27c9-4461-be4a-3ea916ac3875","Type":"ContainerStarted","Data":"dce08e1ec986996e970efbb6f6648c033d109954843d8d7fdff3f85b1470728f"} Feb 24 05:29:22.935670 master-0 kubenswrapper[7614]: I0224 05:29:22.934732 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" event={"ID":"b426cb33-1624-45e6-b8c5-4e8d251f6339","Type":"ContainerStarted","Data":"772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b"} Feb 24 05:29:22.938949 master-0 kubenswrapper[7614]: I0224 05:29:22.936119 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:29:22.939869 master-0 kubenswrapper[7614]: I0224 05:29:22.939744 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:29:22.941258 master-0 kubenswrapper[7614]: I0224 05:29:22.941221 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-7b87v_3f511d03-a182-4968-ba40-5c5c10e5e6be/openshift-config-operator/2.log" Feb 24 05:29:22.942220 master-0 kubenswrapper[7614]: I0224 05:29:22.942039 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-7b87v_3f511d03-a182-4968-ba40-5c5c10e5e6be/openshift-config-operator/1.log" Feb 24 05:29:22.942971 master-0 
kubenswrapper[7614]: I0224 05:29:22.942929 7614 generic.go:334] "Generic (PLEG): container finished" podID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerID="dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe" exitCode=255 Feb 24 05:29:22.943049 master-0 kubenswrapper[7614]: I0224 05:29:22.942974 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerDied","Data":"dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe"} Feb 24 05:29:22.943835 master-0 kubenswrapper[7614]: I0224 05:29:22.943802 7614 scope.go:117] "RemoveContainer" containerID="8ea9d13281e6d20cdeced5c381efed4b0919698bffbbef309d207e550b38c166" Feb 24 05:29:22.944450 master-0 kubenswrapper[7614]: I0224 05:29:22.944400 7614 scope.go:117] "RemoveContainer" containerID="dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe" Feb 24 05:29:22.944783 master-0 kubenswrapper[7614]: E0224 05:29:22.944733 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-7b87v_openshift-config-operator(3f511d03-a182-4968-ba40-5c5c10e5e6be)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" Feb 24 05:29:23.189570 master-0 kubenswrapper[7614]: I0224 05:29:23.189433 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" path="/var/lib/kubelet/pods/2303d3b8-fe6a-469a-a306-4e1685181dbe/volumes" Feb 24 05:29:23.663577 master-0 kubenswrapper[7614]: I0224 05:29:23.663491 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:23.663577 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:23.663577 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:23.663577 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:23.664214 master-0 kubenswrapper[7614]: I0224 05:29:23.664167 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:23.960466 master-0 kubenswrapper[7614]: I0224 05:29:23.960273 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-7b87v_3f511d03-a182-4968-ba40-5c5c10e5e6be/openshift-config-operator/2.log" Feb 24 05:29:24.666425 master-0 kubenswrapper[7614]: I0224 05:29:24.664117 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:24.666425 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:24.666425 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:24.666425 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:24.666425 master-0 kubenswrapper[7614]: I0224 05:29:24.664264 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:25.662599 master-0 kubenswrapper[7614]: I0224 05:29:25.662511 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:25.662599 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:25.662599 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:25.662599 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:25.663178 master-0 kubenswrapper[7614]: I0224 05:29:25.662615 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:26.662858 master-0 kubenswrapper[7614]: I0224 05:29:26.662774 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:26.662858 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:26.662858 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:26.662858 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:26.664673 master-0 kubenswrapper[7614]: I0224 05:29:26.662884 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:27.664416 master-0 kubenswrapper[7614]: I0224 05:29:27.664280 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:27.664416 master-0 kubenswrapper[7614]: 
[-]has-synced failed: reason withheld Feb 24 05:29:27.664416 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:27.664416 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:27.664416 master-0 kubenswrapper[7614]: I0224 05:29:27.664409 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:28.176071 master-0 kubenswrapper[7614]: I0224 05:29:28.175938 7614 scope.go:117] "RemoveContainer" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9" Feb 24 05:29:28.176611 master-0 kubenswrapper[7614]: E0224 05:29:28.176405 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da)\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" Feb 24 05:29:28.664074 master-0 kubenswrapper[7614]: I0224 05:29:28.663934 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:28.664074 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:28.664074 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:28.664074 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:28.665564 master-0 kubenswrapper[7614]: I0224 05:29:28.664096 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:29.181551 master-0 kubenswrapper[7614]: I0224 05:29:29.181424 7614 scope.go:117] "RemoveContainer" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" Feb 24 05:29:29.182025 master-0 kubenswrapper[7614]: E0224 05:29:29.181946 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:29:29.664440 master-0 kubenswrapper[7614]: I0224 05:29:29.664328 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:29.664440 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:29.664440 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:29.664440 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:29.665569 master-0 kubenswrapper[7614]: I0224 05:29:29.664445 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:30.174980 master-0 kubenswrapper[7614]: I0224 05:29:30.174842 7614 scope.go:117] "RemoveContainer" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d" Feb 24 05:29:30.175476 master-0 kubenswrapper[7614]: E0224 
05:29:30.175401 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:29:30.663740 master-0 kubenswrapper[7614]: I0224 05:29:30.663533 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:30.663740 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:30.663740 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:30.663740 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:30.663740 master-0 kubenswrapper[7614]: I0224 05:29:30.663653 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:31.662852 master-0 kubenswrapper[7614]: I0224 05:29:31.662717 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:31.662852 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:31.662852 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:31.662852 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:31.664176 master-0 kubenswrapper[7614]: I0224 05:29:31.662852 7614 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:32.663962 master-0 kubenswrapper[7614]: I0224 05:29:32.663858 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:32.663962 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:32.663962 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:32.663962 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:32.663962 master-0 kubenswrapper[7614]: I0224 05:29:32.663942 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:33.174714 master-0 kubenswrapper[7614]: I0224 05:29:33.174643 7614 scope.go:117] "RemoveContainer" containerID="86e637d0b5dc95d562f8425432d6a525c0e0e358c1d51fc8a2c0d80b43fd747a" Feb 24 05:29:33.663676 master-0 kubenswrapper[7614]: I0224 05:29:33.663558 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:33.663676 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:33.663676 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:33.663676 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:33.663676 master-0 kubenswrapper[7614]: I0224 05:29:33.663662 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:34.058492 master-0 kubenswrapper[7614]: I0224 05:29:34.058379 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/2.log" Feb 24 05:29:34.058960 master-0 kubenswrapper[7614]: I0224 05:29:34.058874 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" event={"ID":"39623346-691b-42c8-af76-409d4f6629af","Type":"ContainerStarted","Data":"a16e982011d25a808420e64218d38901635267123952229b9d01f68aefd3e0c2"} Feb 24 05:29:34.174932 master-0 kubenswrapper[7614]: I0224 05:29:34.174677 7614 scope.go:117] "RemoveContainer" containerID="dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe" Feb 24 05:29:34.175374 master-0 kubenswrapper[7614]: E0224 05:29:34.175182 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-6f47d587d6-7b87v_openshift-config-operator(3f511d03-a182-4968-ba40-5c5c10e5e6be)\"" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" podUID="3f511d03-a182-4968-ba40-5c5c10e5e6be" Feb 24 05:29:34.664710 master-0 kubenswrapper[7614]: I0224 05:29:34.664552 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:34.664710 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:34.664710 master-0 
kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:34.664710 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:34.666242 master-0 kubenswrapper[7614]: I0224 05:29:34.664749 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:35.663972 master-0 kubenswrapper[7614]: I0224 05:29:35.663884 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:35.663972 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:35.663972 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:35.663972 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:35.664699 master-0 kubenswrapper[7614]: I0224 05:29:35.663991 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:36.689688 master-0 kubenswrapper[7614]: I0224 05:29:36.664173 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:36.689688 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:36.689688 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:36.689688 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:36.689688 master-0 kubenswrapper[7614]: I0224 05:29:36.664380 7614 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:37.663001 master-0 kubenswrapper[7614]: I0224 05:29:37.662904 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:37.663001 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:37.663001 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:37.663001 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:37.663001 master-0 kubenswrapper[7614]: I0224 05:29:37.662997 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:38.666365 master-0 kubenswrapper[7614]: I0224 05:29:38.664828 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:38.666365 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:38.666365 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:38.666365 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:38.666365 master-0 kubenswrapper[7614]: I0224 05:29:38.665090 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 24 05:29:39.666414 master-0 kubenswrapper[7614]: I0224 05:29:39.666336 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:39.666414 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:39.666414 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:39.666414 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:39.667640 master-0 kubenswrapper[7614]: I0224 05:29:39.666438 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:40.123430 master-0 kubenswrapper[7614]: I0224 05:29:40.123284 7614 generic.go:334] "Generic (PLEG): container finished" podID="0e05783d-6bd1-4c71-87d9-1eb3edd827b3" containerID="883402f37d06428c5ac9d5006756ff5c514e20caeb827c4b80ee87b11ce334df" exitCode=0 Feb 24 05:29:40.123430 master-0 kubenswrapper[7614]: I0224 05:29:40.123377 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" event={"ID":"0e05783d-6bd1-4c71-87d9-1eb3edd827b3","Type":"ContainerDied","Data":"883402f37d06428c5ac9d5006756ff5c514e20caeb827c4b80ee87b11ce334df"} Feb 24 05:29:40.124416 master-0 kubenswrapper[7614]: I0224 05:29:40.124377 7614 scope.go:117] "RemoveContainer" containerID="883402f37d06428c5ac9d5006756ff5c514e20caeb827c4b80ee87b11ce334df" Feb 24 05:29:40.663706 master-0 kubenswrapper[7614]: I0224 05:29:40.663612 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:40.663706 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:40.663706 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:40.663706 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:40.664082 master-0 kubenswrapper[7614]: I0224 05:29:40.663717 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:41.136896 master-0 kubenswrapper[7614]: I0224 05:29:41.136826 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" event={"ID":"0e05783d-6bd1-4c71-87d9-1eb3edd827b3","Type":"ContainerStarted","Data":"d837d027f14c5b13f6651317447955d27e98d1c29b78df8c793c68f53a7d166b"} Feb 24 05:29:41.662219 master-0 kubenswrapper[7614]: I0224 05:29:41.662124 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:41.662219 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:41.662219 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:41.662219 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:41.662941 master-0 kubenswrapper[7614]: I0224 05:29:41.662227 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:42.173926 master-0 kubenswrapper[7614]: I0224 05:29:42.173853 7614 scope.go:117] "RemoveContainer" 
containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d" Feb 24 05:29:42.175147 master-0 kubenswrapper[7614]: E0224 05:29:42.174139 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:29:42.175147 master-0 kubenswrapper[7614]: I0224 05:29:42.174438 7614 scope.go:117] "RemoveContainer" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" Feb 24 05:29:42.175147 master-0 kubenswrapper[7614]: E0224 05:29:42.174734 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:29:42.663743 master-0 kubenswrapper[7614]: I0224 05:29:42.663648 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:42.663743 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:42.663743 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:42.663743 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:42.664115 master-0 kubenswrapper[7614]: I0224 05:29:42.663799 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:43.174391 master-0 kubenswrapper[7614]: I0224 05:29:43.174278 7614 scope.go:117] "RemoveContainer" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9" Feb 24 05:29:43.175375 master-0 kubenswrapper[7614]: E0224 05:29:43.174799 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da)\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" Feb 24 05:29:43.663783 master-0 kubenswrapper[7614]: I0224 05:29:43.663686 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:43.663783 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:43.663783 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:43.663783 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:43.663783 master-0 kubenswrapper[7614]: I0224 05:29:43.663766 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:44.663815 master-0 kubenswrapper[7614]: I0224 05:29:44.663715 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:44.663815 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:44.663815 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:44.663815 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:44.665265 master-0 kubenswrapper[7614]: I0224 05:29:44.663831 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:45.663599 master-0 kubenswrapper[7614]: I0224 05:29:45.663476 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:45.663599 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:45.663599 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:45.663599 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:45.664729 master-0 kubenswrapper[7614]: I0224 05:29:45.663617 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:46.663693 master-0 kubenswrapper[7614]: I0224 05:29:46.663597 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:46.663693 master-0 kubenswrapper[7614]: 
[-]has-synced failed: reason withheld Feb 24 05:29:46.663693 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:46.663693 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:46.664834 master-0 kubenswrapper[7614]: I0224 05:29:46.663711 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:47.665412 master-0 kubenswrapper[7614]: I0224 05:29:47.664670 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:47.665412 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:47.665412 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:47.665412 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:47.665412 master-0 kubenswrapper[7614]: I0224 05:29:47.664763 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:48.663740 master-0 kubenswrapper[7614]: I0224 05:29:48.663626 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:48.663740 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:48.663740 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:48.663740 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:48.664291 master-0 
kubenswrapper[7614]: I0224 05:29:48.663769 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:49.182264 master-0 kubenswrapper[7614]: I0224 05:29:49.182152 7614 scope.go:117] "RemoveContainer" containerID="dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe" Feb 24 05:29:49.663755 master-0 kubenswrapper[7614]: I0224 05:29:49.663647 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:49.663755 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:49.663755 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:49.663755 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:49.664376 master-0 kubenswrapper[7614]: I0224 05:29:49.663770 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:50.255932 master-0 kubenswrapper[7614]: I0224 05:29:50.255863 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-7b87v_3f511d03-a182-4968-ba40-5c5c10e5e6be/openshift-config-operator/2.log" Feb 24 05:29:50.257172 master-0 kubenswrapper[7614]: I0224 05:29:50.256546 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" 
event={"ID":"3f511d03-a182-4968-ba40-5c5c10e5e6be","Type":"ContainerStarted","Data":"37ee9e88653efcca5d9369e7e3804a3e76d6d8c3606b94423b5791232d9067d8"} Feb 24 05:29:50.257172 master-0 kubenswrapper[7614]: I0224 05:29:50.256893 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:29:50.664617 master-0 kubenswrapper[7614]: I0224 05:29:50.664385 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:50.664617 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:50.664617 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:50.664617 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:50.665346 master-0 kubenswrapper[7614]: I0224 05:29:50.665250 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:51.663138 master-0 kubenswrapper[7614]: I0224 05:29:51.663048 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:51.663138 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:51.663138 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:51.663138 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:51.664554 master-0 kubenswrapper[7614]: I0224 05:29:51.663160 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:52.244875 master-0 kubenswrapper[7614]: I0224 05:29:52.244828 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:29:52.288103 master-0 kubenswrapper[7614]: I0224 05:29:52.287997 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler-cert-syncer/0.log" Feb 24 05:29:52.288879 master-0 kubenswrapper[7614]: I0224 05:29:52.288822 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler/0.log" Feb 24 05:29:52.289406 master-0 kubenswrapper[7614]: I0224 05:29:52.289340 7614 generic.go:334] "Generic (PLEG): container finished" podID="ebb9c3b6f4ad10a97951cbde655daea9" containerID="9e0cc0f7f581085a792db3f9717a0c7d3e86218c9ccfa7f2c67da547aa98fac9" exitCode=1 Feb 24 05:29:52.289497 master-0 kubenswrapper[7614]: I0224 05:29:52.289443 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerDied","Data":"9e0cc0f7f581085a792db3f9717a0c7d3e86218c9ccfa7f2c67da547aa98fac9"} Feb 24 05:29:52.290333 master-0 kubenswrapper[7614]: I0224 05:29:52.290268 7614 scope.go:117] "RemoveContainer" containerID="9e0cc0f7f581085a792db3f9717a0c7d3e86218c9ccfa7f2c67da547aa98fac9" Feb 24 05:29:52.293773 master-0 kubenswrapper[7614]: I0224 05:29:52.293698 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-b985k_8be1f8db-3f0b-4d6f-be42-7564fba66820/multus-admission-controller/0.log" Feb 24 05:29:52.293902 master-0 kubenswrapper[7614]: I0224 05:29:52.293796 7614 generic.go:334] "Generic (PLEG): container finished" podID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerID="eb40f700665ddc5a59ad171b706d2fdf1426e6e5d152e9cd1782903011fd60d0" exitCode=137 Feb 24 05:29:52.293976 master-0 kubenswrapper[7614]: I0224 05:29:52.293935 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" event={"ID":"8be1f8db-3f0b-4d6f-be42-7564fba66820","Type":"ContainerDied","Data":"eb40f700665ddc5a59ad171b706d2fdf1426e6e5d152e9cd1782903011fd60d0"} Feb 24 05:29:52.663580 master-0 kubenswrapper[7614]: I0224 05:29:52.663490 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:52.663580 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:52.663580 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:52.663580 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:52.664383 master-0 kubenswrapper[7614]: I0224 05:29:52.663611 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:53.174357 master-0 kubenswrapper[7614]: I0224 05:29:53.173877 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-b985k_8be1f8db-3f0b-4d6f-be42-7564fba66820/multus-admission-controller/0.log" Feb 24 05:29:53.174357 master-0 kubenswrapper[7614]: I0224 
05:29:53.173965 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:29:53.216236 master-0 kubenswrapper[7614]: I0224 05:29:53.216178 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj2tz\" (UniqueName: \"kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz\") pod \"8be1f8db-3f0b-4d6f-be42-7564fba66820\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " Feb 24 05:29:53.216725 master-0 kubenswrapper[7614]: I0224 05:29:53.216694 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") pod \"8be1f8db-3f0b-4d6f-be42-7564fba66820\" (UID: \"8be1f8db-3f0b-4d6f-be42-7564fba66820\") " Feb 24 05:29:53.240547 master-0 kubenswrapper[7614]: I0224 05:29:53.240478 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "8be1f8db-3f0b-4d6f-be42-7564fba66820" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:29:53.242593 master-0 kubenswrapper[7614]: I0224 05:29:53.242466 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz" (OuterVolumeSpecName: "kube-api-access-xj2tz") pod "8be1f8db-3f0b-4d6f-be42-7564fba66820" (UID: "8be1f8db-3f0b-4d6f-be42-7564fba66820"). InnerVolumeSpecName "kube-api-access-xj2tz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:29:53.308646 master-0 kubenswrapper[7614]: I0224 05:29:53.308538 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5f98f4f8d5-b985k_8be1f8db-3f0b-4d6f-be42-7564fba66820/multus-admission-controller/0.log" Feb 24 05:29:53.308915 master-0 kubenswrapper[7614]: I0224 05:29:53.308781 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" event={"ID":"8be1f8db-3f0b-4d6f-be42-7564fba66820","Type":"ContainerDied","Data":"46a5994b405203be832b6e8a9d78723e27b9a540f4fcd8cfc16f6928523dcdb0"} Feb 24 05:29:53.308915 master-0 kubenswrapper[7614]: I0224 05:29:53.308859 7614 scope.go:117] "RemoveContainer" containerID="bbd5aa582f8241ea4c62c11beba1abad300d328a3af1603fa3f170227b163e28" Feb 24 05:29:53.309129 master-0 kubenswrapper[7614]: I0224 05:29:53.308938 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5f98f4f8d5-b985k" Feb 24 05:29:53.318918 master-0 kubenswrapper[7614]: I0224 05:29:53.318840 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj2tz\" (UniqueName: \"kubernetes.io/projected/8be1f8db-3f0b-4d6f-be42-7564fba66820-kube-api-access-xj2tz\") on node \"master-0\" DevicePath \"\"" Feb 24 05:29:53.318918 master-0 kubenswrapper[7614]: I0224 05:29:53.318896 7614 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8be1f8db-3f0b-4d6f-be42-7564fba66820-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:29:53.319622 master-0 kubenswrapper[7614]: I0224 05:29:53.319549 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler-cert-syncer/0.log" Feb 24 05:29:53.320546 master-0 kubenswrapper[7614]: I0224 
05:29:53.320488 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler/0.log" Feb 24 05:29:53.321142 master-0 kubenswrapper[7614]: I0224 05:29:53.321069 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"ebb9c3b6f4ad10a97951cbde655daea9","Type":"ContainerStarted","Data":"9c08e2b99bda6708882f4175ffb049128a51c70caca590d2e61441c5ea9ae2b4"} Feb 24 05:29:53.340739 master-0 kubenswrapper[7614]: I0224 05:29:53.340691 7614 scope.go:117] "RemoveContainer" containerID="eb40f700665ddc5a59ad171b706d2fdf1426e6e5d152e9cd1782903011fd60d0" Feb 24 05:29:53.380020 master-0 kubenswrapper[7614]: I0224 05:29:53.379739 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"] Feb 24 05:29:53.385921 master-0 kubenswrapper[7614]: I0224 05:29:53.385836 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5f98f4f8d5-b985k"] Feb 24 05:29:53.663979 master-0 kubenswrapper[7614]: I0224 05:29:53.663756 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:53.663979 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:53.663979 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:53.663979 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:53.665209 master-0 kubenswrapper[7614]: I0224 05:29:53.664419 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 24 05:29:54.515822 master-0 kubenswrapper[7614]: I0224 05:29:54.515712 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:29:54.663503 master-0 kubenswrapper[7614]: I0224 05:29:54.663389 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:54.663503 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:54.663503 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:54.663503 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:54.664052 master-0 kubenswrapper[7614]: I0224 05:29:54.663513 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:55.174600 master-0 kubenswrapper[7614]: I0224 05:29:55.174521 7614 scope.go:117] "RemoveContainer" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" Feb 24 05:29:55.175013 master-0 kubenswrapper[7614]: I0224 05:29:55.174758 7614 scope.go:117] "RemoveContainer" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d" Feb 24 05:29:55.175013 master-0 kubenswrapper[7614]: E0224 05:29:55.174937 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" 
podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:29:55.175173 master-0 kubenswrapper[7614]: E0224 05:29:55.175084 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:29:55.189753 master-0 kubenswrapper[7614]: I0224 05:29:55.189689 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" path="/var/lib/kubelet/pods/8be1f8db-3f0b-4d6f-be42-7564fba66820/volumes" Feb 24 05:29:55.662788 master-0 kubenswrapper[7614]: I0224 05:29:55.662707 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:55.662788 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:55.662788 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:55.662788 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:55.663242 master-0 kubenswrapper[7614]: I0224 05:29:55.662827 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:56.664381 master-0 kubenswrapper[7614]: I0224 05:29:56.664244 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Feb 24 05:29:56.664381 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:56.664381 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:56.664381 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:56.665543 master-0 kubenswrapper[7614]: I0224 05:29:56.664456 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:57.663879 master-0 kubenswrapper[7614]: I0224 05:29:57.663762 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:57.663879 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:57.663879 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:57.663879 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:57.664600 master-0 kubenswrapper[7614]: I0224 05:29:57.663899 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:58.175048 master-0 kubenswrapper[7614]: I0224 05:29:58.174938 7614 scope.go:117] "RemoveContainer" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9" Feb 24 05:29:58.175460 master-0 kubenswrapper[7614]: E0224 05:29:58.175402 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator 
pod=authentication-operator-5bd7c86784-kbb8z_openshift-authentication-operator(59333a14-5bdc-4590-a3da-af7300f086da)\"" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" podUID="59333a14-5bdc-4590-a3da-af7300f086da" Feb 24 05:29:58.663246 master-0 kubenswrapper[7614]: I0224 05:29:58.663154 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:58.663246 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:58.663246 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:58.663246 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:58.663246 master-0 kubenswrapper[7614]: I0224 05:29:58.663243 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:29:59.664645 master-0 kubenswrapper[7614]: I0224 05:29:59.664557 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:29:59.664645 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:29:59.664645 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:29:59.664645 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:29:59.665450 master-0 kubenswrapper[7614]: I0224 05:29:59.664659 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 05:30:00.663435 master-0 kubenswrapper[7614]: I0224 05:30:00.663328 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:00.663435 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:00.663435 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:00.663435 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:00.663853 master-0 kubenswrapper[7614]: I0224 05:30:00.663460 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:01.663279 master-0 kubenswrapper[7614]: I0224 05:30:01.663155 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:01.663279 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:01.663279 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:01.663279 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:01.663279 master-0 kubenswrapper[7614]: I0224 05:30:01.663246 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:02.663213 master-0 kubenswrapper[7614]: I0224 05:30:02.662960 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:02.663213 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:02.663213 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:02.663213 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:02.663213 master-0 kubenswrapper[7614]: I0224 05:30:02.663101 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:03.663356 master-0 kubenswrapper[7614]: I0224 05:30:03.663236 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:03.663356 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:03.663356 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:03.663356 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:03.663356 master-0 kubenswrapper[7614]: I0224 05:30:03.663345 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:04.664165 master-0 kubenswrapper[7614]: I0224 05:30:04.664023 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:04.664165 master-0 kubenswrapper[7614]: 
[-]has-synced failed: reason withheld Feb 24 05:30:04.664165 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:04.664165 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:04.664165 master-0 kubenswrapper[7614]: I0224 05:30:04.664158 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:05.663068 master-0 kubenswrapper[7614]: I0224 05:30:05.662894 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:05.663068 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:05.663068 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:05.663068 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:05.663673 master-0 kubenswrapper[7614]: I0224 05:30:05.663071 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:06.663339 master-0 kubenswrapper[7614]: I0224 05:30:06.663255 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:06.663339 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:06.663339 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:06.663339 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:06.664028 master-0 
kubenswrapper[7614]: I0224 05:30:06.663383 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:07.175407 master-0 kubenswrapper[7614]: I0224 05:30:07.175312 7614 scope.go:117] "RemoveContainer" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" Feb 24 05:30:07.175920 master-0 kubenswrapper[7614]: E0224 05:30:07.175665 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:30:07.662628 master-0 kubenswrapper[7614]: I0224 05:30:07.662530 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:07.662628 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:07.662628 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:07.662628 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:07.663179 master-0 kubenswrapper[7614]: I0224 05:30:07.662673 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:08.664423 master-0 kubenswrapper[7614]: I0224 05:30:08.664288 7614 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:08.664423 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:08.664423 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:08.664423 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:08.665947 master-0 kubenswrapper[7614]: I0224 05:30:08.664446 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:08.665947 master-0 kubenswrapper[7614]: I0224 05:30:08.664562 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:30:08.666158 master-0 kubenswrapper[7614]: I0224 05:30:08.666126 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"0d9c40e1ab9fe194700e549fe0bed42e1d026dc7732cf97087ef5f334f860eb9"} pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" containerMessage="Container router failed startup probe, will be restarted" Feb 24 05:30:08.666233 master-0 kubenswrapper[7614]: I0224 05:30:08.666201 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" containerID="cri-o://0d9c40e1ab9fe194700e549fe0bed42e1d026dc7732cf97087ef5f334f860eb9" gracePeriod=3600 Feb 24 05:30:10.174919 master-0 kubenswrapper[7614]: I0224 05:30:10.174709 7614 scope.go:117] "RemoveContainer" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d" Feb 24 
05:30:10.490448 master-0 kubenswrapper[7614]: I0224 05:30:10.490370 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/4.log" Feb 24 05:30:10.523040 master-0 kubenswrapper[7614]: I0224 05:30:10.522911 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5"} Feb 24 05:30:13.174912 master-0 kubenswrapper[7614]: I0224 05:30:13.174817 7614 scope.go:117] "RemoveContainer" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9" Feb 24 05:30:13.551999 master-0 kubenswrapper[7614]: I0224 05:30:13.551910 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/4.log" Feb 24 05:30:13.552385 master-0 kubenswrapper[7614]: I0224 05:30:13.552020 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" event={"ID":"59333a14-5bdc-4590-a3da-af7300f086da","Type":"ContainerStarted","Data":"dde085e1b87187a0dcc6d22539824592002eea05252f8d257514db888c205d6a"} Feb 24 05:30:14.174575 master-0 kubenswrapper[7614]: I0224 05:30:14.174468 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:30:14.174575 master-0 kubenswrapper[7614]: I0224 05:30:14.174541 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:30:14.196548 master-0 kubenswrapper[7614]: I0224 05:30:14.196437 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-etcd/etcd-master-0"] Feb 24 05:30:14.203180 master-0 kubenswrapper[7614]: I0224 05:30:14.203102 7614 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Feb 24 05:30:14.216744 master-0 kubenswrapper[7614]: I0224 05:30:14.216629 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 24 05:30:14.240285 master-0 kubenswrapper[7614]: I0224 05:30:14.240198 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Feb 24 05:30:14.565146 master-0 kubenswrapper[7614]: I0224 05:30:14.565041 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:30:14.565146 master-0 kubenswrapper[7614]: I0224 05:30:14.565127 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="6636104e-8b36-4c09-9e6b-e13fc7237a3e" Feb 24 05:30:19.231862 master-0 kubenswrapper[7614]: I0224 05:30:19.231719 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=5.231687086 podStartE2EDuration="5.231687086s" podCreationTimestamp="2026-02-24 05:30:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:30:19.227144364 +0000 UTC m=+950.261887530" watchObservedRunningTime="2026-02-24 05:30:19.231687086 +0000 UTC m=+950.266430282" Feb 24 05:30:20.174894 master-0 kubenswrapper[7614]: I0224 05:30:20.174682 7614 scope.go:117] "RemoveContainer" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" Feb 24 05:30:20.175193 master-0 kubenswrapper[7614]: E0224 05:30:20.175025 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-6847bb4785-vqn96_openshift-cluster-storage-operator(b79ef90c-dc66-4d5f-8943-2c3ac68796ba)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" podUID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" Feb 24 05:30:34.174203 master-0 kubenswrapper[7614]: I0224 05:30:34.174115 7614 scope.go:117] "RemoveContainer" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" Feb 24 05:30:34.797707 master-0 kubenswrapper[7614]: I0224 05:30:34.797594 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/4.log" Feb 24 05:30:34.797707 master-0 kubenswrapper[7614]: I0224 05:30:34.797685 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" event={"ID":"b79ef90c-dc66-4d5f-8943-2c3ac68796ba","Type":"ContainerStarted","Data":"8dabc07b49b569ca211748297452e9cc7192e76e95c033ea5faabef78c2ff964"} Feb 24 05:30:55.008974 master-0 kubenswrapper[7614]: I0224 05:30:55.008881 7614 generic.go:334] "Generic (PLEG): container finished" podID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerID="0d9c40e1ab9fe194700e549fe0bed42e1d026dc7732cf97087ef5f334f860eb9" exitCode=0 Feb 24 05:30:55.008974 master-0 kubenswrapper[7614]: I0224 05:30:55.008965 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerDied","Data":"0d9c40e1ab9fe194700e549fe0bed42e1d026dc7732cf97087ef5f334f860eb9"} Feb 24 05:30:55.010194 master-0 kubenswrapper[7614]: I0224 05:30:55.009035 7614 scope.go:117] "RemoveContainer" containerID="ebf89d5ba5d68a652168caf590af22fc79d75d991b321ff2b9f369556f4d28c8" Feb 24 05:30:56.021851 master-0 kubenswrapper[7614]: I0224 05:30:56.021754 7614 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerStarted","Data":"acb0698fd79ca407db7d9ea2aa9e8794fcca326eb46507a49a5c7b349296ed25"} Feb 24 05:30:56.660265 master-0 kubenswrapper[7614]: I0224 05:30:56.660145 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:30:56.664632 master-0 kubenswrapper[7614]: I0224 05:30:56.664540 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:56.664632 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:56.664632 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:56.664632 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:56.664955 master-0 kubenswrapper[7614]: I0224 05:30:56.664655 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:57.664734 master-0 kubenswrapper[7614]: I0224 05:30:57.664601 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:57.664734 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:57.664734 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:57.664734 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:57.664734 master-0 kubenswrapper[7614]: I0224 05:30:57.664723 7614 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:58.665238 master-0 kubenswrapper[7614]: I0224 05:30:58.665136 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:58.665238 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:58.665238 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:58.665238 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:58.666488 master-0 kubenswrapper[7614]: I0224 05:30:58.665240 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:30:59.663983 master-0 kubenswrapper[7614]: I0224 05:30:59.663816 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:30:59.663983 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:30:59.663983 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:30:59.663983 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:30:59.663983 master-0 kubenswrapper[7614]: I0224 05:30:59.663971 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
24 05:31:00.664086 master-0 kubenswrapper[7614]: I0224 05:31:00.663965 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:00.664086 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:00.664086 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:00.664086 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:00.665082 master-0 kubenswrapper[7614]: I0224 05:31:00.664158 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:01.664105 master-0 kubenswrapper[7614]: I0224 05:31:01.664004 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:01.664105 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:01.664105 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:01.664105 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:01.665002 master-0 kubenswrapper[7614]: I0224 05:31:01.664115 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:02.664024 master-0 kubenswrapper[7614]: I0224 05:31:02.663913 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:02.664024 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:02.664024 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:02.664024 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:02.664855 master-0 kubenswrapper[7614]: I0224 05:31:02.664074 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:03.664099 master-0 kubenswrapper[7614]: I0224 05:31:03.663972 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:03.664099 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:03.664099 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:03.664099 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:03.664099 master-0 kubenswrapper[7614]: I0224 05:31:03.664099 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:04.663947 master-0 kubenswrapper[7614]: I0224 05:31:04.663868 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:04.663947 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:04.663947 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:04.663947 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:04.664637 master-0 kubenswrapper[7614]: I0224 05:31:04.664536 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:05.660878 master-0 kubenswrapper[7614]: I0224 05:31:05.660734 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:31:05.664250 master-0 kubenswrapper[7614]: I0224 05:31:05.664178 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:05.664250 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:05.664250 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:05.664250 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:05.664550 master-0 kubenswrapper[7614]: I0224 05:31:05.664298 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:06.663429 master-0 kubenswrapper[7614]: I0224 05:31:06.663288 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:06.663429 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:06.663429 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:06.663429 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:06.664849 master-0 kubenswrapper[7614]: I0224 05:31:06.663443 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:07.664159 master-0 kubenswrapper[7614]: I0224 05:31:07.664052 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:07.664159 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:07.664159 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:07.664159 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:07.664159 master-0 kubenswrapper[7614]: I0224 05:31:07.664143 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:08.663851 master-0 kubenswrapper[7614]: I0224 05:31:08.663736 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:08.663851 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:08.663851 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:08.663851 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:08.663851 master-0 kubenswrapper[7614]: I0224 05:31:08.663835 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:09.664511 master-0 kubenswrapper[7614]: I0224 05:31:09.664395 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:09.664511 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:09.664511 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:09.664511 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:09.665782 master-0 kubenswrapper[7614]: I0224 05:31:09.664518 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:10.663813 master-0 kubenswrapper[7614]: I0224 05:31:10.663712 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:10.663813 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:10.663813 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:10.663813 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:10.663813 master-0 kubenswrapper[7614]: I0224 05:31:10.663811 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:11.664251 master-0 kubenswrapper[7614]: I0224 05:31:11.664116 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:11.664251 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:11.664251 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:11.664251 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:11.665408 master-0 kubenswrapper[7614]: I0224 05:31:11.664245 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:12.662863 master-0 kubenswrapper[7614]: I0224 05:31:12.662783 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:12.662863 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:12.662863 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:12.662863 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:12.663441 master-0 kubenswrapper[7614]: I0224 05:31:12.663400 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:13.664763 master-0 kubenswrapper[7614]: I0224 05:31:13.664614 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:13.664763 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:13.664763 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:13.664763 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:13.665836 master-0 kubenswrapper[7614]: I0224 05:31:13.664770 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:14.663442 master-0 kubenswrapper[7614]: I0224 05:31:14.663302 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:14.663442 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:14.663442 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:14.663442 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:14.663998 master-0 kubenswrapper[7614]: I0224 05:31:14.663461 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:15.662922 master-0 kubenswrapper[7614]: I0224 05:31:15.662792 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:15.662922 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:15.662922 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:15.662922 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:15.662922 master-0 kubenswrapper[7614]: I0224 05:31:15.662890 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:16.663460 master-0 kubenswrapper[7614]: I0224 05:31:16.663339 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:16.663460 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:16.663460 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:16.663460 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:16.664711 master-0 kubenswrapper[7614]: I0224 05:31:16.663461 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:17.663991 master-0 kubenswrapper[7614]: I0224 05:31:17.663814 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:17.663991 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:17.663991 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:17.663991 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:17.664913 master-0 kubenswrapper[7614]: I0224 05:31:17.664025 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:18.664419 master-0 kubenswrapper[7614]: I0224 05:31:18.664301 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:18.664419 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:18.664419 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:18.664419 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:18.665984 master-0 kubenswrapper[7614]: I0224 05:31:18.664448 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:19.664602 master-0 kubenswrapper[7614]: I0224 05:31:19.664480 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:19.664602 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:19.664602 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:19.664602 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:19.665763 master-0 kubenswrapper[7614]: I0224 05:31:19.664628 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:20.664202 master-0 kubenswrapper[7614]: I0224 05:31:20.664042 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:20.664202 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:20.664202 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:20.664202 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:20.664202 master-0 kubenswrapper[7614]: I0224 05:31:20.664193 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:21.663597 master-0 kubenswrapper[7614]: I0224 05:31:21.663488 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:21.663597 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:21.663597 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:21.663597 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:21.664162 master-0 kubenswrapper[7614]: I0224 05:31:21.663617 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:22.663748 master-0 kubenswrapper[7614]: I0224 05:31:22.663606 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:22.663748 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:22.663748 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:22.663748 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:22.664704 master-0 kubenswrapper[7614]: I0224 05:31:22.663769 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:23.664032 master-0 kubenswrapper[7614]: I0224 05:31:23.663921 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:23.664032 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:23.664032 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:23.664032 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:23.665243 master-0 kubenswrapper[7614]: I0224 05:31:23.664121 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:24.664051 master-0 kubenswrapper[7614]: I0224 05:31:24.663961 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:24.664051 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:24.664051 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:24.664051 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:24.665574 master-0 kubenswrapper[7614]: I0224 05:31:24.664095 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:25.663586 master-0 kubenswrapper[7614]: I0224 05:31:25.663497 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:25.663586 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:25.663586 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:25.663586 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:25.664088 master-0 kubenswrapper[7614]: I0224 05:31:25.663601 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:26.663990 master-0 kubenswrapper[7614]: I0224 05:31:26.663879 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:26.663990 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:26.663990 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:26.663990 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:26.665188 master-0 kubenswrapper[7614]: I0224 05:31:26.663995 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:27.663225 master-0 kubenswrapper[7614]: I0224 05:31:27.663117 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:27.663225 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:27.663225 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:27.663225 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:27.663845 master-0 kubenswrapper[7614]: I0224 05:31:27.663231 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:28.664731 master-0 kubenswrapper[7614]: I0224 05:31:28.664616 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:28.664731 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:28.664731 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:28.664731 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:28.665988 master-0 kubenswrapper[7614]: I0224 05:31:28.664758 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:29.662839 master-0 kubenswrapper[7614]: I0224 05:31:29.662747 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:29.662839 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:29.662839 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:29.662839 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:29.663195 master-0 kubenswrapper[7614]: I0224 05:31:29.662867 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:30.663677 master-0 kubenswrapper[7614]: I0224 05:31:30.663578 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:30.663677 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:30.663677 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:30.663677 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:30.664442 master-0 kubenswrapper[7614]: I0224 05:31:30.663700 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:31.663424 master-0 kubenswrapper[7614]: I0224 05:31:31.663295 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:31.663424 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:31.663424 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:31.663424 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:31.664433 master-0 kubenswrapper[7614]: I0224 05:31:31.663467 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:32.664675 master-0 kubenswrapper[7614]: I0224 05:31:32.664462 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:32.664675 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:32.664675 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:32.664675 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:32.664675 master-0 kubenswrapper[7614]: I0224 05:31:32.664587 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:33.663552 master-0 kubenswrapper[7614]: I0224 05:31:33.663400 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:33.663552 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:33.663552 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:33.663552 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:33.664018 master-0 kubenswrapper[7614]: I0224 05:31:33.663589 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:34.664126 master-0 kubenswrapper[7614]: I0224 05:31:34.664018 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:34.664126 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:34.664126 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:34.664126 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:34.665049 master-0 kubenswrapper[7614]: I0224 05:31:34.664174 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:35.663337 master-0 kubenswrapper[7614]: I0224 05:31:35.663203 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:35.663337 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:35.663337 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:35.663337 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:35.663908 master-0 kubenswrapper[7614]: I0224 05:31:35.663377 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:36.662932 master-0 kubenswrapper[7614]: I0224 05:31:36.662849 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:36.662932 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:36.662932 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:36.662932 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:36.662932 master-0 kubenswrapper[7614]: I0224 05:31:36.662939 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:37.664135 master-0 kubenswrapper[7614]: I0224 05:31:37.663921 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:37.664135 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:37.664135 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:37.664135 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:37.664135 master-0 kubenswrapper[7614]: I0224 05:31:37.664101 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:38.663569 master-0 kubenswrapper[7614]: I0224 05:31:38.663462 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:38.663569 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:38.663569 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:38.663569 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:38.664031 master-0 kubenswrapper[7614]: I0224 05:31:38.663597 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:39.664521 master-0 kubenswrapper[7614]: I0224 05:31:39.664411 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:39.664521 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:39.664521 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:39.664521 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:39.665296 master-0 kubenswrapper[7614]: I0224 05:31:39.664544 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:40.664400 master-0 kubenswrapper[7614]: I0224 05:31:40.664099 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:40.664400 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:40.664400 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:40.664400 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:40.664400 master-0 kubenswrapper[7614]: I0224 05:31:40.664246 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:41.663551 master-0 kubenswrapper[7614]: I0224 05:31:41.663455 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:41.663551 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:41.663551 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:41.663551 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:41.663551 master-0 kubenswrapper[7614]: I0224 05:31:41.663554 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:42.664068 master-0 kubenswrapper[7614]: I0224 05:31:42.663945 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:42.664068 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:42.664068 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:42.664068 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:42.665290 master-0 kubenswrapper[7614]: I0224 05:31:42.664068 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:43.664199 master-0 kubenswrapper[7614]: I0224 05:31:43.664060 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:43.664199 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:43.664199 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:43.664199 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:31:43.665405 master-0 kubenswrapper[7614]: I0224 05:31:43.664206 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:31:44.664166 master-0 kubenswrapper[7614]: I0224 05:31:44.664042 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:31:44.664166 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:31:44.664166 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:31:44.664166 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:44.664166 master-0 kubenswrapper[7614]: I0224 05:31:44.664165 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:45.663251 master-0 kubenswrapper[7614]: I0224 05:31:45.663124 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:45.663251 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:45.663251 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:45.663251 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:45.663251 master-0 kubenswrapper[7614]: I0224 05:31:45.663222 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:46.663824 master-0 kubenswrapper[7614]: I0224 05:31:46.663741 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:46.663824 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:46.663824 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:46.663824 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:46.664601 master-0 kubenswrapper[7614]: I0224 05:31:46.663863 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:47.663829 master-0 kubenswrapper[7614]: I0224 05:31:47.663692 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:47.663829 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:47.663829 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:47.663829 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:47.663829 master-0 kubenswrapper[7614]: I0224 05:31:47.663812 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:48.663719 master-0 kubenswrapper[7614]: I0224 05:31:48.663595 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:48.663719 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:48.663719 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:48.663719 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:48.664773 master-0 kubenswrapper[7614]: I0224 05:31:48.663755 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:49.663616 
master-0 kubenswrapper[7614]: I0224 05:31:49.663488 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:49.663616 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:49.663616 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:49.663616 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:49.664779 master-0 kubenswrapper[7614]: I0224 05:31:49.663636 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:50.663655 master-0 kubenswrapper[7614]: I0224 05:31:50.663540 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:50.663655 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:50.663655 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:50.663655 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:50.664821 master-0 kubenswrapper[7614]: I0224 05:31:50.663690 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:51.663846 master-0 kubenswrapper[7614]: I0224 05:31:51.663739 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:51.663846 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:51.663846 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:51.663846 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:51.665012 master-0 kubenswrapper[7614]: I0224 05:31:51.663858 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:52.663528 master-0 kubenswrapper[7614]: I0224 05:31:52.663304 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:52.663528 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:52.663528 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:52.663528 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:52.663947 master-0 kubenswrapper[7614]: I0224 05:31:52.663614 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:53.663304 master-0 kubenswrapper[7614]: I0224 05:31:53.663191 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:53.663304 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:53.663304 master-0 
kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:53.663304 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:53.663304 master-0 kubenswrapper[7614]: I0224 05:31:53.663299 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:54.663274 master-0 kubenswrapper[7614]: I0224 05:31:54.663116 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:54.663274 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:54.663274 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:54.663274 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:54.663274 master-0 kubenswrapper[7614]: I0224 05:31:54.663250 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:55.664652 master-0 kubenswrapper[7614]: I0224 05:31:55.664516 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:55.664652 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:55.664652 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:55.664652 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:55.664652 master-0 kubenswrapper[7614]: I0224 05:31:55.664633 7614 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:56.664127 master-0 kubenswrapper[7614]: I0224 05:31:56.664031 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:56.664127 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:56.664127 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:56.664127 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:56.664761 master-0 kubenswrapper[7614]: I0224 05:31:56.664137 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:57.662777 master-0 kubenswrapper[7614]: I0224 05:31:57.662676 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:57.662777 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:57.662777 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:57.662777 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:57.663353 master-0 kubenswrapper[7614]: I0224 05:31:57.662785 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 24 05:31:58.704278 master-0 kubenswrapper[7614]: I0224 05:31:58.704165 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:58.704278 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:58.704278 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:58.704278 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:58.705629 master-0 kubenswrapper[7614]: I0224 05:31:58.704292 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:31:59.663041 master-0 kubenswrapper[7614]: I0224 05:31:59.662947 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:31:59.663041 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:31:59.663041 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:31:59.663041 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:31:59.663775 master-0 kubenswrapper[7614]: I0224 05:31:59.663061 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:00.664075 master-0 kubenswrapper[7614]: I0224 05:32:00.663926 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:00.664075 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:00.664075 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:00.664075 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:00.664075 master-0 kubenswrapper[7614]: I0224 05:32:00.664041 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:01.663598 master-0 kubenswrapper[7614]: I0224 05:32:01.663471 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:01.663598 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:01.663598 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:01.663598 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:01.664861 master-0 kubenswrapper[7614]: I0224 05:32:01.663610 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:02.663763 master-0 kubenswrapper[7614]: I0224 05:32:02.663649 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:02.663763 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 
05:32:02.663763 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:02.663763 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:02.663763 master-0 kubenswrapper[7614]: I0224 05:32:02.663762 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:03.663941 master-0 kubenswrapper[7614]: I0224 05:32:03.663833 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:03.663941 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:03.663941 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:03.663941 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:03.665072 master-0 kubenswrapper[7614]: I0224 05:32:03.664037 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:04.664035 master-0 kubenswrapper[7614]: I0224 05:32:04.663887 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:04.664035 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:04.664035 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:04.664035 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:04.664035 master-0 kubenswrapper[7614]: I0224 05:32:04.664006 
7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:05.663937 master-0 kubenswrapper[7614]: I0224 05:32:05.663797 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:05.663937 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:05.663937 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:05.663937 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:05.665902 master-0 kubenswrapper[7614]: I0224 05:32:05.663945 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:06.663080 master-0 kubenswrapper[7614]: I0224 05:32:06.663028 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:06.663080 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:06.663080 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:06.663080 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:06.663520 master-0 kubenswrapper[7614]: I0224 05:32:06.663102 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 24 05:32:07.664047 master-0 kubenswrapper[7614]: I0224 05:32:07.663969 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:07.664047 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:07.664047 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:07.664047 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:07.664983 master-0 kubenswrapper[7614]: I0224 05:32:07.664062 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:08.663011 master-0 kubenswrapper[7614]: I0224 05:32:08.662939 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:08.663011 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:08.663011 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:08.663011 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:08.663556 master-0 kubenswrapper[7614]: I0224 05:32:08.663029 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:09.663493 master-0 kubenswrapper[7614]: I0224 05:32:09.663387 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:09.663493 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:09.663493 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:09.663493 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:09.664693 master-0 kubenswrapper[7614]: I0224 05:32:09.663526 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:09.785870 master-0 kubenswrapper[7614]: I0224 05:32:09.785742 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb"] Feb 24 05:32:09.786239 master-0 kubenswrapper[7614]: E0224 05:32:09.786154 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b0d9bb-1b88-4023-8b08-896d581c79c7" containerName="installer" Feb 24 05:32:09.786239 master-0 kubenswrapper[7614]: I0224 05:32:09.786179 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b0d9bb-1b88-4023-8b08-896d581c79c7" containerName="installer" Feb 24 05:32:09.786239 master-0 kubenswrapper[7614]: E0224 05:32:09.786197 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerName="multus-admission-controller" Feb 24 05:32:09.786239 master-0 kubenswrapper[7614]: I0224 05:32:09.786207 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerName="multus-admission-controller" Feb 24 05:32:09.786239 master-0 kubenswrapper[7614]: E0224 05:32:09.786222 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d063f48-5f89-47d0-bafc-84a52839c806" containerName="installer" Feb 24 
05:32:09.786239 master-0 kubenswrapper[7614]: I0224 05:32:09.786232 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d063f48-5f89-47d0-bafc-84a52839c806" containerName="installer" Feb 24 05:32:09.786659 master-0 kubenswrapper[7614]: E0224 05:32:09.786258 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" containerName="kube-multus-additional-cni-plugins" Feb 24 05:32:09.786659 master-0 kubenswrapper[7614]: I0224 05:32:09.786269 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" containerName="kube-multus-additional-cni-plugins" Feb 24 05:32:09.786659 master-0 kubenswrapper[7614]: E0224 05:32:09.786279 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerName="kube-rbac-proxy" Feb 24 05:32:09.786659 master-0 kubenswrapper[7614]: I0224 05:32:09.786376 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerName="kube-rbac-proxy" Feb 24 05:32:09.786907 master-0 kubenswrapper[7614]: I0224 05:32:09.786818 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerName="multus-admission-controller" Feb 24 05:32:09.786907 master-0 kubenswrapper[7614]: I0224 05:32:09.786886 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="2303d3b8-fe6a-469a-a306-4e1685181dbe" containerName="kube-multus-additional-cni-plugins" Feb 24 05:32:09.787031 master-0 kubenswrapper[7614]: I0224 05:32:09.786926 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d063f48-5f89-47d0-bafc-84a52839c806" containerName="installer" Feb 24 05:32:09.787031 master-0 kubenswrapper[7614]: I0224 05:32:09.786946 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b0d9bb-1b88-4023-8b08-896d581c79c7" containerName="installer" Feb 24 05:32:09.787031 master-0 
kubenswrapper[7614]: I0224 05:32:09.786961 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be1f8db-3f0b-4d6f-be42-7564fba66820" containerName="kube-rbac-proxy" Feb 24 05:32:09.787825 master-0 kubenswrapper[7614]: I0224 05:32:09.787772 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:09.790291 master-0 kubenswrapper[7614]: I0224 05:32:09.790228 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 05:32:09.793841 master-0 kubenswrapper[7614]: I0224 05:32:09.793786 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-27rfg" Feb 24 05:32:09.799651 master-0 kubenswrapper[7614]: I0224 05:32:09.799589 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb"] Feb 24 05:32:09.901528 master-0 kubenswrapper[7614]: I0224 05:32:09.901434 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2249df3-3ce9-4f96-8f6f-59943125f8ed-config-volume\") pod \"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:09.901528 master-0 kubenswrapper[7614]: I0224 05:32:09.901504 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2249df3-3ce9-4f96-8f6f-59943125f8ed-secret-volume\") pod \"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:09.901528 master-0 kubenswrapper[7614]: I0224 
05:32:09.901527 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/f2249df3-3ce9-4f96-8f6f-59943125f8ed-kube-api-access-g82bz\") pod \"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:10.003002 master-0 kubenswrapper[7614]: I0224 05:32:10.002878 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2249df3-3ce9-4f96-8f6f-59943125f8ed-config-volume\") pod \"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:10.003002 master-0 kubenswrapper[7614]: I0224 05:32:10.002958 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2249df3-3ce9-4f96-8f6f-59943125f8ed-secret-volume\") pod \"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:10.003002 master-0 kubenswrapper[7614]: I0224 05:32:10.002985 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/f2249df3-3ce9-4f96-8f6f-59943125f8ed-kube-api-access-g82bz\") pod \"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:10.005472 master-0 kubenswrapper[7614]: I0224 05:32:10.005396 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2249df3-3ce9-4f96-8f6f-59943125f8ed-config-volume\") pod 
\"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:10.008266 master-0 kubenswrapper[7614]: I0224 05:32:10.008135 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2249df3-3ce9-4f96-8f6f-59943125f8ed-secret-volume\") pod \"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:10.040376 master-0 kubenswrapper[7614]: I0224 05:32:10.040261 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/f2249df3-3ce9-4f96-8f6f-59943125f8ed-kube-api-access-g82bz\") pod \"collect-profiles-29531850-l54gb\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:10.111910 master-0 kubenswrapper[7614]: I0224 05:32:10.111755 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:10.478514 master-0 kubenswrapper[7614]: W0224 05:32:10.478404 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2249df3_3ce9_4f96_8f6f_59943125f8ed.slice/crio-47ea3d92ee18dd9e6cbbd5b8e7f44f8b09235cb5c1fd91ba759f995d35faf1f2 WatchSource:0}: Error finding container 47ea3d92ee18dd9e6cbbd5b8e7f44f8b09235cb5c1fd91ba759f995d35faf1f2: Status 404 returned error can't find the container with id 47ea3d92ee18dd9e6cbbd5b8e7f44f8b09235cb5c1fd91ba759f995d35faf1f2 Feb 24 05:32:10.481080 master-0 kubenswrapper[7614]: I0224 05:32:10.480984 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb"] Feb 24 05:32:10.665820 master-0 kubenswrapper[7614]: I0224 05:32:10.665712 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:10.665820 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:10.665820 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:10.665820 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:10.666979 master-0 kubenswrapper[7614]: I0224 05:32:10.665844 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:10.725668 master-0 kubenswrapper[7614]: I0224 05:32:10.725503 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" 
event={"ID":"f2249df3-3ce9-4f96-8f6f-59943125f8ed","Type":"ContainerStarted","Data":"f98e6d86d52c9e26477f3eaacf651db4b9ae2a6be8a9a3959935ba8da1491173"} Feb 24 05:32:10.725668 master-0 kubenswrapper[7614]: I0224 05:32:10.725627 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" event={"ID":"f2249df3-3ce9-4f96-8f6f-59943125f8ed","Type":"ContainerStarted","Data":"47ea3d92ee18dd9e6cbbd5b8e7f44f8b09235cb5c1fd91ba759f995d35faf1f2"} Feb 24 05:32:10.764584 master-0 kubenswrapper[7614]: I0224 05:32:10.764440 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" podStartSLOduration=130.764343061 podStartE2EDuration="2m10.764343061s" podCreationTimestamp="2026-02-24 05:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:32:10.753919631 +0000 UTC m=+1061.788662827" watchObservedRunningTime="2026-02-24 05:32:10.764343061 +0000 UTC m=+1061.799086247" Feb 24 05:32:11.663892 master-0 kubenswrapper[7614]: I0224 05:32:11.663662 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:11.663892 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:11.663892 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:11.663892 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:11.663892 master-0 kubenswrapper[7614]: I0224 05:32:11.663802 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Feb 24 05:32:11.738528 master-0 kubenswrapper[7614]: I0224 05:32:11.738418 7614 generic.go:334] "Generic (PLEG): container finished" podID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" containerID="f98e6d86d52c9e26477f3eaacf651db4b9ae2a6be8a9a3959935ba8da1491173" exitCode=0 Feb 24 05:32:11.739150 master-0 kubenswrapper[7614]: I0224 05:32:11.738664 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" event={"ID":"f2249df3-3ce9-4f96-8f6f-59943125f8ed","Type":"ContainerDied","Data":"f98e6d86d52c9e26477f3eaacf651db4b9ae2a6be8a9a3959935ba8da1491173"} Feb 24 05:32:11.743953 master-0 kubenswrapper[7614]: I0224 05:32:11.743855 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/5.log" Feb 24 05:32:11.746071 master-0 kubenswrapper[7614]: I0224 05:32:11.746016 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/4.log" Feb 24 05:32:11.746696 master-0 kubenswrapper[7614]: I0224 05:32:11.746635 7614 generic.go:334] "Generic (PLEG): container finished" podID="3d6b1ce7-1213-494c-829d-186d39eac7eb" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" exitCode=1 Feb 24 05:32:11.746769 master-0 kubenswrapper[7614]: I0224 05:32:11.746717 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerDied","Data":"b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5"} Feb 24 05:32:11.746812 master-0 kubenswrapper[7614]: I0224 05:32:11.746784 7614 scope.go:117] "RemoveContainer" containerID="09f85c8b01d7446d5646107c9c18780a59af7c98e21d551a62767e55f5cabf2d" Feb 24 05:32:11.747886 master-0 
kubenswrapper[7614]: I0224 05:32:11.747822 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:32:11.748447 master-0 kubenswrapper[7614]: E0224 05:32:11.748392 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:32:12.664969 master-0 kubenswrapper[7614]: I0224 05:32:12.664881 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:12.664969 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:12.664969 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:12.664969 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:12.665303 master-0 kubenswrapper[7614]: I0224 05:32:12.664981 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:12.757486 master-0 kubenswrapper[7614]: I0224 05:32:12.757426 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/5.log" Feb 24 05:32:13.219805 master-0 kubenswrapper[7614]: I0224 05:32:13.219720 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:13.379170 master-0 kubenswrapper[7614]: I0224 05:32:13.379093 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2249df3-3ce9-4f96-8f6f-59943125f8ed-config-volume\") pod \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " Feb 24 05:32:13.379509 master-0 kubenswrapper[7614]: I0224 05:32:13.379215 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2249df3-3ce9-4f96-8f6f-59943125f8ed-secret-volume\") pod \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " Feb 24 05:32:13.379509 master-0 kubenswrapper[7614]: I0224 05:32:13.379288 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/f2249df3-3ce9-4f96-8f6f-59943125f8ed-kube-api-access-g82bz\") pod \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\" (UID: \"f2249df3-3ce9-4f96-8f6f-59943125f8ed\") " Feb 24 05:32:13.379877 master-0 kubenswrapper[7614]: I0224 05:32:13.379810 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2249df3-3ce9-4f96-8f6f-59943125f8ed-config-volume" (OuterVolumeSpecName: "config-volume") pod "f2249df3-3ce9-4f96-8f6f-59943125f8ed" (UID: "f2249df3-3ce9-4f96-8f6f-59943125f8ed"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:32:13.382475 master-0 kubenswrapper[7614]: I0224 05:32:13.382412 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2249df3-3ce9-4f96-8f6f-59943125f8ed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f2249df3-3ce9-4f96-8f6f-59943125f8ed" (UID: "f2249df3-3ce9-4f96-8f6f-59943125f8ed"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:32:13.385979 master-0 kubenswrapper[7614]: I0224 05:32:13.385863 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2249df3-3ce9-4f96-8f6f-59943125f8ed-kube-api-access-g82bz" (OuterVolumeSpecName: "kube-api-access-g82bz") pod "f2249df3-3ce9-4f96-8f6f-59943125f8ed" (UID: "f2249df3-3ce9-4f96-8f6f-59943125f8ed"). InnerVolumeSpecName "kube-api-access-g82bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:32:13.481061 master-0 kubenswrapper[7614]: I0224 05:32:13.480985 7614 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2249df3-3ce9-4f96-8f6f-59943125f8ed-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 05:32:13.481061 master-0 kubenswrapper[7614]: I0224 05:32:13.481037 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g82bz\" (UniqueName: \"kubernetes.io/projected/f2249df3-3ce9-4f96-8f6f-59943125f8ed-kube-api-access-g82bz\") on node \"master-0\" DevicePath \"\"" Feb 24 05:32:13.481061 master-0 kubenswrapper[7614]: I0224 05:32:13.481053 7614 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2249df3-3ce9-4f96-8f6f-59943125f8ed-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 05:32:13.663953 master-0 kubenswrapper[7614]: I0224 05:32:13.663833 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:13.663953 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:13.663953 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:13.663953 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:13.664530 master-0 kubenswrapper[7614]: I0224 05:32:13.663958 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:13.769340 master-0 kubenswrapper[7614]: I0224 05:32:13.769162 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" event={"ID":"f2249df3-3ce9-4f96-8f6f-59943125f8ed","Type":"ContainerDied","Data":"47ea3d92ee18dd9e6cbbd5b8e7f44f8b09235cb5c1fd91ba759f995d35faf1f2"} Feb 24 05:32:13.769340 master-0 kubenswrapper[7614]: I0224 05:32:13.769286 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47ea3d92ee18dd9e6cbbd5b8e7f44f8b09235cb5c1fd91ba759f995d35faf1f2" Feb 24 05:32:13.770271 master-0 kubenswrapper[7614]: I0224 05:32:13.769366 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:32:14.663846 master-0 kubenswrapper[7614]: I0224 05:32:14.663738 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:14.663846 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:14.663846 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:14.663846 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:14.664266 master-0 kubenswrapper[7614]: I0224 05:32:14.663850 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:15.663982 master-0 kubenswrapper[7614]: I0224 05:32:15.663901 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:15.663982 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:15.663982 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:15.663982 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:15.665230 master-0 kubenswrapper[7614]: I0224 05:32:15.663997 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:16.663821 master-0 kubenswrapper[7614]: I0224 05:32:16.663687 7614 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:16.663821 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:16.663821 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:16.663821 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:16.665390 master-0 kubenswrapper[7614]: I0224 05:32:16.663859 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:17.664201 master-0 kubenswrapper[7614]: I0224 05:32:17.664091 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:17.664201 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:17.664201 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:17.664201 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:17.665499 master-0 kubenswrapper[7614]: I0224 05:32:17.664211 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:18.662809 master-0 kubenswrapper[7614]: I0224 05:32:18.662717 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
05:32:18.662809 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:18.662809 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:18.662809 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:18.662809 master-0 kubenswrapper[7614]: I0224 05:32:18.662802 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:19.663699 master-0 kubenswrapper[7614]: I0224 05:32:19.663607 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:19.663699 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:19.663699 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:19.663699 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:19.665037 master-0 kubenswrapper[7614]: I0224 05:32:19.663738 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:20.665034 master-0 kubenswrapper[7614]: I0224 05:32:20.664884 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:20.665034 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:20.665034 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:20.665034 master-0 kubenswrapper[7614]: healthz 
check failed Feb 24 05:32:20.666276 master-0 kubenswrapper[7614]: I0224 05:32:20.665079 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:21.664112 master-0 kubenswrapper[7614]: I0224 05:32:21.664021 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:21.664112 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:21.664112 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:21.664112 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:21.664772 master-0 kubenswrapper[7614]: I0224 05:32:21.664142 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:22.662198 master-0 kubenswrapper[7614]: I0224 05:32:22.662120 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:22.662198 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:22.662198 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:22.662198 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:22.662198 master-0 kubenswrapper[7614]: I0224 05:32:22.662188 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:23.174234 master-0 kubenswrapper[7614]: I0224 05:32:23.174159 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:32:23.174687 master-0 kubenswrapper[7614]: E0224 05:32:23.174449 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:32:23.664470 master-0 kubenswrapper[7614]: I0224 05:32:23.664391 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:23.664470 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:23.664470 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:23.664470 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:23.665563 master-0 kubenswrapper[7614]: I0224 05:32:23.664513 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:24.663767 master-0 kubenswrapper[7614]: I0224 05:32:24.663664 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:24.663767 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:24.663767 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:24.663767 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:24.664341 master-0 kubenswrapper[7614]: I0224 05:32:24.663782 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:25.663347 master-0 kubenswrapper[7614]: I0224 05:32:25.663189 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:25.663347 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:25.663347 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:25.663347 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:25.663347 master-0 kubenswrapper[7614]: I0224 05:32:25.663282 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:26.664399 master-0 kubenswrapper[7614]: I0224 05:32:26.664249 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:26.664399 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:26.664399 master-0 kubenswrapper[7614]: [+]process-running ok 
Feb 24 05:32:26.664399 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:26.665708 master-0 kubenswrapper[7614]: I0224 05:32:26.664404 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:27.663878 master-0 kubenswrapper[7614]: I0224 05:32:27.663775 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:27.663878 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:27.663878 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:27.663878 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:27.664463 master-0 kubenswrapper[7614]: I0224 05:32:27.663897 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:28.664165 master-0 kubenswrapper[7614]: I0224 05:32:28.664054 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:28.664165 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:28.664165 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:28.664165 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:28.665519 master-0 kubenswrapper[7614]: I0224 05:32:28.664193 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:29.663951 master-0 kubenswrapper[7614]: I0224 05:32:29.663837 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:29.663951 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:29.663951 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:29.663951 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:29.664530 master-0 kubenswrapper[7614]: I0224 05:32:29.663959 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:30.662895 master-0 kubenswrapper[7614]: I0224 05:32:30.662807 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:30.662895 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:30.662895 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:30.662895 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:30.663567 master-0 kubenswrapper[7614]: I0224 05:32:30.662903 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:31.663191 
master-0 kubenswrapper[7614]: I0224 05:32:31.663079 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:31.663191 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:31.663191 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:31.663191 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:31.664569 master-0 kubenswrapper[7614]: I0224 05:32:31.663208 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:32.662985 master-0 kubenswrapper[7614]: I0224 05:32:32.662884 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:32.662985 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:32.662985 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:32.662985 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:32.663749 master-0 kubenswrapper[7614]: I0224 05:32:32.663044 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:33.663570 master-0 kubenswrapper[7614]: I0224 05:32:33.663484 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:33.663570 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:33.663570 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:33.663570 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:33.664275 master-0 kubenswrapper[7614]: I0224 05:32:33.663593 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:34.175783 master-0 kubenswrapper[7614]: I0224 05:32:34.175675 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:32:34.176809 master-0 kubenswrapper[7614]: E0224 05:32:34.176740 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:32:34.663367 master-0 kubenswrapper[7614]: I0224 05:32:34.663259 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:34.663367 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:34.663367 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:34.663367 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:34.664610 master-0 kubenswrapper[7614]: I0224 05:32:34.663382 7614 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:35.664062 master-0 kubenswrapper[7614]: I0224 05:32:35.663938 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:35.664062 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:35.664062 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:35.664062 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:35.665131 master-0 kubenswrapper[7614]: I0224 05:32:35.664079 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:36.663230 master-0 kubenswrapper[7614]: I0224 05:32:36.663152 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:36.663230 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:36.663230 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:36.663230 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:36.663612 master-0 kubenswrapper[7614]: I0224 05:32:36.663276 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 24 05:32:37.663340 master-0 kubenswrapper[7614]: I0224 05:32:37.663221 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:37.663340 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:37.663340 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:37.663340 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:37.666183 master-0 kubenswrapper[7614]: I0224 05:32:37.663411 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:38.664271 master-0 kubenswrapper[7614]: I0224 05:32:38.664133 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:38.664271 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:38.664271 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:38.664271 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:38.664271 master-0 kubenswrapper[7614]: I0224 05:32:38.664248 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:39.663820 master-0 kubenswrapper[7614]: I0224 05:32:39.663681 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:39.663820 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:39.663820 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:39.663820 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:39.663820 master-0 kubenswrapper[7614]: I0224 05:32:39.663810 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:40.663808 master-0 kubenswrapper[7614]: I0224 05:32:40.663682 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:40.663808 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:40.663808 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:40.663808 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:40.663808 master-0 kubenswrapper[7614]: I0224 05:32:40.663798 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:41.663921 master-0 kubenswrapper[7614]: I0224 05:32:41.663806 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:41.663921 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 
05:32:41.663921 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:41.663921 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:41.663921 master-0 kubenswrapper[7614]: I0224 05:32:41.663920 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:42.663901 master-0 kubenswrapper[7614]: I0224 05:32:42.663798 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:42.663901 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:42.663901 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:42.663901 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:42.665022 master-0 kubenswrapper[7614]: I0224 05:32:42.663914 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:43.663483 master-0 kubenswrapper[7614]: I0224 05:32:43.663329 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:43.663483 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:43.663483 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:43.663483 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:43.663483 master-0 kubenswrapper[7614]: I0224 05:32:43.663444 
7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:44.663869 master-0 kubenswrapper[7614]: I0224 05:32:44.663776 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:44.663869 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:44.663869 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:44.663869 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:44.665723 master-0 kubenswrapper[7614]: I0224 05:32:44.665420 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:45.662895 master-0 kubenswrapper[7614]: I0224 05:32:45.662789 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:45.662895 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:45.662895 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:45.662895 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:45.662895 master-0 kubenswrapper[7614]: I0224 05:32:45.662891 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 24 05:32:46.663404 master-0 kubenswrapper[7614]: I0224 05:32:46.663303 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:46.663404 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:46.663404 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:46.663404 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:46.664653 master-0 kubenswrapper[7614]: I0224 05:32:46.664119 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:47.664133 master-0 kubenswrapper[7614]: I0224 05:32:47.664012 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:47.664133 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:47.664133 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:47.664133 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:47.664133 master-0 kubenswrapper[7614]: I0224 05:32:47.664132 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:48.175062 master-0 kubenswrapper[7614]: I0224 05:32:48.174930 7614 scope.go:117] "RemoveContainer" 
containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:32:48.175600 master-0 kubenswrapper[7614]: E0224 05:32:48.175541 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:32:48.663668 master-0 kubenswrapper[7614]: I0224 05:32:48.663598 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:48.663668 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:48.663668 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:48.663668 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:48.664381 master-0 kubenswrapper[7614]: I0224 05:32:48.664302 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:49.664251 master-0 kubenswrapper[7614]: I0224 05:32:49.664145 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:49.664251 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:49.664251 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:49.664251 master-0 
kubenswrapper[7614]: healthz check failed Feb 24 05:32:49.665436 master-0 kubenswrapper[7614]: I0224 05:32:49.664274 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:50.663941 master-0 kubenswrapper[7614]: I0224 05:32:50.663785 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:50.663941 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:50.663941 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:50.663941 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:50.663941 master-0 kubenswrapper[7614]: I0224 05:32:50.663935 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:51.664030 master-0 kubenswrapper[7614]: I0224 05:32:51.663929 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:51.664030 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:51.664030 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:51.664030 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:51.665373 master-0 kubenswrapper[7614]: I0224 05:32:51.664077 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:52.663719 master-0 kubenswrapper[7614]: I0224 05:32:52.663570 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:52.663719 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:52.663719 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:52.663719 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:52.663719 master-0 kubenswrapper[7614]: I0224 05:32:52.663703 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:53.663830 master-0 kubenswrapper[7614]: I0224 05:32:53.663722 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:53.663830 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:53.663830 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:53.663830 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:53.663830 master-0 kubenswrapper[7614]: I0224 05:32:53.663821 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:54.663866 
master-0 kubenswrapper[7614]: I0224 05:32:54.663767 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:54.663866 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:54.663866 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:54.663866 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:54.665261 master-0 kubenswrapper[7614]: I0224 05:32:54.663911 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:55.663152 master-0 kubenswrapper[7614]: I0224 05:32:55.663058 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:32:55.663152 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:32:55.663152 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:32:55.663152 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:32:55.663547 master-0 kubenswrapper[7614]: I0224 05:32:55.663175 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:32:55.663547 master-0 kubenswrapper[7614]: I0224 05:32:55.663267 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:32:55.664549 master-0 
kubenswrapper[7614]: I0224 05:32:55.664489 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"acb0698fd79ca407db7d9ea2aa9e8794fcca326eb46507a49a5c7b349296ed25"} pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" containerMessage="Container router failed startup probe, will be restarted" Feb 24 05:32:55.664974 master-0 kubenswrapper[7614]: I0224 05:32:55.664781 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" containerID="cri-o://acb0698fd79ca407db7d9ea2aa9e8794fcca326eb46507a49a5c7b349296ed25" gracePeriod=3600 Feb 24 05:33:01.174821 master-0 kubenswrapper[7614]: I0224 05:33:01.174748 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:33:01.175832 master-0 kubenswrapper[7614]: E0224 05:33:01.175026 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:33:15.175007 master-0 kubenswrapper[7614]: I0224 05:33:15.174953 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:33:15.176182 master-0 kubenswrapper[7614]: E0224 05:33:15.176155 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator 
pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:33:28.174774 master-0 kubenswrapper[7614]: I0224 05:33:28.174687 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:33:28.175644 master-0 kubenswrapper[7614]: E0224 05:33:28.175151 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:33:42.617875 master-0 kubenswrapper[7614]: I0224 05:33:42.617748 7614 generic.go:334] "Generic (PLEG): container finished" podID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerID="acb0698fd79ca407db7d9ea2aa9e8794fcca326eb46507a49a5c7b349296ed25" exitCode=0 Feb 24 05:33:42.617875 master-0 kubenswrapper[7614]: I0224 05:33:42.617839 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerDied","Data":"acb0698fd79ca407db7d9ea2aa9e8794fcca326eb46507a49a5c7b349296ed25"} Feb 24 05:33:42.617875 master-0 kubenswrapper[7614]: I0224 05:33:42.617890 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerStarted","Data":"4f3ae3a1fb93152f16413963009dac29f899944719e22e0315c1d5fd940eb4a6"} Feb 24 05:33:42.619068 master-0 kubenswrapper[7614]: I0224 05:33:42.617949 7614 scope.go:117] "RemoveContainer" 
containerID="0d9c40e1ab9fe194700e549fe0bed42e1d026dc7732cf97087ef5f334f860eb9" Feb 24 05:33:42.660008 master-0 kubenswrapper[7614]: I0224 05:33:42.659931 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:33:42.664458 master-0 kubenswrapper[7614]: I0224 05:33:42.664398 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:42.664458 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:42.664458 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:42.664458 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:42.664789 master-0 kubenswrapper[7614]: I0224 05:33:42.664491 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:43.175355 master-0 kubenswrapper[7614]: I0224 05:33:43.175252 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:33:43.175809 master-0 kubenswrapper[7614]: E0224 05:33:43.175641 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:33:43.663746 master-0 kubenswrapper[7614]: I0224 05:33:43.663639 7614 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:43.663746 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:43.663746 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:43.663746 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:43.664638 master-0 kubenswrapper[7614]: I0224 05:33:43.663774 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:44.664215 master-0 kubenswrapper[7614]: I0224 05:33:44.664089 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:44.664215 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:44.664215 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:44.664215 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:44.665662 master-0 kubenswrapper[7614]: I0224 05:33:44.664227 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:45.661007 master-0 kubenswrapper[7614]: I0224 05:33:45.660867 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:33:45.664731 master-0 kubenswrapper[7614]: I0224 05:33:45.664674 7614 patch_prober.go:28] interesting 
pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:45.664731 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:45.664731 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:45.664731 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:45.665685 master-0 kubenswrapper[7614]: I0224 05:33:45.664741 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:46.663712 master-0 kubenswrapper[7614]: I0224 05:33:46.663629 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:46.663712 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:46.663712 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:46.663712 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:46.664247 master-0 kubenswrapper[7614]: I0224 05:33:46.663731 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:47.665051 master-0 kubenswrapper[7614]: I0224 05:33:47.664943 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 
05:33:47.665051 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:47.665051 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:47.665051 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:47.665879 master-0 kubenswrapper[7614]: I0224 05:33:47.665082 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:48.663559 master-0 kubenswrapper[7614]: I0224 05:33:48.663444 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:48.663559 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:48.663559 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:48.663559 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:48.664133 master-0 kubenswrapper[7614]: I0224 05:33:48.663586 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:49.663695 master-0 kubenswrapper[7614]: I0224 05:33:49.663600 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:49.663695 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:49.663695 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:49.663695 master-0 kubenswrapper[7614]: healthz 
check failed Feb 24 05:33:49.664536 master-0 kubenswrapper[7614]: I0224 05:33:49.663722 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:50.664239 master-0 kubenswrapper[7614]: I0224 05:33:50.664134 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:50.664239 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:50.664239 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:50.664239 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:50.664239 master-0 kubenswrapper[7614]: I0224 05:33:50.664227 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:51.663340 master-0 kubenswrapper[7614]: I0224 05:33:51.663181 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:51.663340 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:51.663340 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:51.663340 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:51.664244 master-0 kubenswrapper[7614]: I0224 05:33:51.663366 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:52.678701 master-0 kubenswrapper[7614]: I0224 05:33:52.678617 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:52.678701 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:52.678701 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:52.678701 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:52.680155 master-0 kubenswrapper[7614]: I0224 05:33:52.680101 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:53.663449 master-0 kubenswrapper[7614]: I0224 05:33:53.663362 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:53.663449 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:53.663449 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:53.663449 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:53.663833 master-0 kubenswrapper[7614]: I0224 05:33:53.663498 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:54.174351 master-0 kubenswrapper[7614]: I0224 05:33:54.174231 7614 
scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:33:54.175727 master-0 kubenswrapper[7614]: E0224 05:33:54.174598 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:33:54.663873 master-0 kubenswrapper[7614]: I0224 05:33:54.663781 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:54.663873 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:54.663873 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:54.663873 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:54.664486 master-0 kubenswrapper[7614]: I0224 05:33:54.663916 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:55.663513 master-0 kubenswrapper[7614]: I0224 05:33:55.663387 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:55.663513 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:55.663513 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 
05:33:55.663513 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:55.663513 master-0 kubenswrapper[7614]: I0224 05:33:55.663515 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:56.664050 master-0 kubenswrapper[7614]: I0224 05:33:56.663949 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:56.664050 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:56.664050 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:56.664050 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:56.665494 master-0 kubenswrapper[7614]: I0224 05:33:56.664061 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:57.664029 master-0 kubenswrapper[7614]: I0224 05:33:57.663925 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:57.664029 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:57.664029 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:57.664029 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:57.664029 master-0 kubenswrapper[7614]: I0224 05:33:57.664023 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:58.664274 master-0 kubenswrapper[7614]: I0224 05:33:58.664176 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:58.664274 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:58.664274 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:58.664274 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:58.665428 master-0 kubenswrapper[7614]: I0224 05:33:58.664339 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:33:59.664720 master-0 kubenswrapper[7614]: I0224 05:33:59.664613 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:33:59.664720 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:33:59.664720 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:33:59.664720 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:33:59.664720 master-0 kubenswrapper[7614]: I0224 05:33:59.664714 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:00.663290 
master-0 kubenswrapper[7614]: I0224 05:34:00.663213 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:00.663290 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:00.663290 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:00.663290 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:00.663776 master-0 kubenswrapper[7614]: I0224 05:34:00.663306 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:01.662512 master-0 kubenswrapper[7614]: I0224 05:34:01.662426 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:01.662512 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:01.662512 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:01.662512 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:01.662512 master-0 kubenswrapper[7614]: I0224 05:34:01.662492 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:02.664001 master-0 kubenswrapper[7614]: I0224 05:34:02.663892 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:02.664001 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:02.664001 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:02.664001 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:02.665010 master-0 kubenswrapper[7614]: I0224 05:34:02.664002 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:03.663964 master-0 kubenswrapper[7614]: I0224 05:34:03.663860 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:03.663964 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:03.663964 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:03.663964 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:03.665442 master-0 kubenswrapper[7614]: I0224 05:34:03.663971 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:04.663896 master-0 kubenswrapper[7614]: I0224 05:34:04.663823 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:04.663896 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:04.663896 master-0 
kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:04.663896 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:04.664419 master-0 kubenswrapper[7614]: I0224 05:34:04.663939 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:05.663802 master-0 kubenswrapper[7614]: I0224 05:34:05.663706 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:05.663802 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:05.663802 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:05.663802 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:05.664530 master-0 kubenswrapper[7614]: I0224 05:34:05.663825 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:06.176126 master-0 kubenswrapper[7614]: I0224 05:34:06.176065 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:34:06.176952 master-0 kubenswrapper[7614]: E0224 05:34:06.176912 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" 
podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:34:06.662868 master-0 kubenswrapper[7614]: I0224 05:34:06.662762 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:06.662868 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:06.662868 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:06.662868 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:06.662868 master-0 kubenswrapper[7614]: I0224 05:34:06.662855 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:07.665917 master-0 kubenswrapper[7614]: I0224 05:34:07.665831 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:07.665917 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:07.665917 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:07.665917 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:07.666611 master-0 kubenswrapper[7614]: I0224 05:34:07.666090 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:08.664018 master-0 kubenswrapper[7614]: I0224 05:34:08.663876 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:08.664018 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:08.664018 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:08.664018 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:08.664633 master-0 kubenswrapper[7614]: I0224 05:34:08.664043 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:09.663362 master-0 kubenswrapper[7614]: I0224 05:34:09.663243 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:09.663362 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:09.663362 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:09.663362 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:09.664355 master-0 kubenswrapper[7614]: I0224 05:34:09.663415 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:10.664951 master-0 kubenswrapper[7614]: I0224 05:34:10.664789 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:10.664951 master-0 kubenswrapper[7614]: 
[-]has-synced failed: reason withheld Feb 24 05:34:10.664951 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:10.664951 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:10.671669 master-0 kubenswrapper[7614]: I0224 05:34:10.664944 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:11.662841 master-0 kubenswrapper[7614]: I0224 05:34:11.662742 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:11.662841 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:11.662841 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:11.662841 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:11.663295 master-0 kubenswrapper[7614]: I0224 05:34:11.662877 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:12.663648 master-0 kubenswrapper[7614]: I0224 05:34:12.663516 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:12.663648 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:12.663648 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:12.663648 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:12.664827 master-0 
kubenswrapper[7614]: I0224 05:34:12.663677 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:12.938158 master-0 kubenswrapper[7614]: I0224 05:34:12.935809 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/3.log" Feb 24 05:34:12.941904 master-0 kubenswrapper[7614]: I0224 05:34:12.938579 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/kube-controller-manager-cert-syncer/0.log" Feb 24 05:34:12.941904 master-0 kubenswrapper[7614]: I0224 05:34:12.939266 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c" exitCode=1 Feb 24 05:34:12.941904 master-0 kubenswrapper[7614]: I0224 05:34:12.939357 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerDied","Data":"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c"} Feb 24 05:34:12.944660 master-0 kubenswrapper[7614]: I0224 05:34:12.944595 7614 scope.go:117] "RemoveContainer" containerID="dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c" Feb 24 05:34:13.663834 master-0 kubenswrapper[7614]: I0224 05:34:13.663726 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:13.663834 
master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:13.663834 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:13.663834 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:13.663834 master-0 kubenswrapper[7614]: I0224 05:34:13.663830 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:13.956143 master-0 kubenswrapper[7614]: I0224 05:34:13.955970 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/3.log" Feb 24 05:34:13.958362 master-0 kubenswrapper[7614]: I0224 05:34:13.958271 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/kube-controller-manager-cert-syncer/0.log" Feb 24 05:34:13.959217 master-0 kubenswrapper[7614]: I0224 05:34:13.959137 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"79656ffd720980cfc7e8a06d9f509855","Type":"ContainerStarted","Data":"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb"} Feb 24 05:34:14.663198 master-0 kubenswrapper[7614]: I0224 05:34:14.663106 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:14.663198 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:14.663198 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:14.663198 master-0 kubenswrapper[7614]: healthz 
check failed Feb 24 05:34:14.663655 master-0 kubenswrapper[7614]: I0224 05:34:14.663243 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:15.664147 master-0 kubenswrapper[7614]: I0224 05:34:15.664046 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:15.664147 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:15.664147 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:15.664147 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:15.665562 master-0 kubenswrapper[7614]: I0224 05:34:15.664156 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:16.663250 master-0 kubenswrapper[7614]: I0224 05:34:16.663160 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:16.663250 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:16.663250 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:16.663250 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:16.664128 master-0 kubenswrapper[7614]: I0224 05:34:16.664073 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:17.663710 master-0 kubenswrapper[7614]: I0224 05:34:17.663571 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:17.663710 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:17.663710 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:17.663710 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:17.664755 master-0 kubenswrapper[7614]: I0224 05:34:17.663718 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:18.174833 master-0 kubenswrapper[7614]: I0224 05:34:18.174733 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:34:18.175224 master-0 kubenswrapper[7614]: E0224 05:34:18.175160 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:34:18.665086 master-0 kubenswrapper[7614]: I0224 05:34:18.664977 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:18.665086 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:18.665086 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:18.665086 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:18.666470 master-0 kubenswrapper[7614]: I0224 05:34:18.665098 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:19.662858 master-0 kubenswrapper[7614]: I0224 05:34:19.662741 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:19.662858 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:19.662858 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:34:19.662858 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:34:19.663298 master-0 kubenswrapper[7614]: I0224 05:34:19.662887 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:34:20.663535 master-0 kubenswrapper[7614]: I0224 05:34:20.663402 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:34:20.663535 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:34:20.663535 master-0 kubenswrapper[7614]: [+]process-running ok 
Feb 24 05:34:20.663535 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:20.664399 master-0 kubenswrapper[7614]: I0224 05:34:20.663630 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:21.663675 master-0 kubenswrapper[7614]: I0224 05:34:21.663601 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:21.663675 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:21.663675 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:21.663675 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:21.663675 master-0 kubenswrapper[7614]: I0224 05:34:21.663679 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:22.662930 master-0 kubenswrapper[7614]: I0224 05:34:22.662745 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:22.662930 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:22.662930 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:22.662930 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:22.663409 master-0 kubenswrapper[7614]: I0224 05:34:22.662946 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:23.663517 master-0 kubenswrapper[7614]: I0224 05:34:23.663371 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:23.663517 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:23.663517 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:23.663517 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:23.664767 master-0 kubenswrapper[7614]: I0224 05:34:23.663569 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:24.664812 master-0 kubenswrapper[7614]: I0224 05:34:24.664714 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:24.664812 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:24.664812 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:24.664812 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:24.665964 master-0 kubenswrapper[7614]: I0224 05:34:24.664831 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:25.662727 master-0 kubenswrapper[7614]: I0224 05:34:25.662592 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:25.662727 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:25.662727 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:25.662727 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:25.662727 master-0 kubenswrapper[7614]: I0224 05:34:25.662721 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:26.663946 master-0 kubenswrapper[7614]: I0224 05:34:26.663824 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:26.663946 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:26.663946 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:26.663946 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:26.665086 master-0 kubenswrapper[7614]: I0224 05:34:26.663947 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:27.663798 master-0 kubenswrapper[7614]: I0224 05:34:27.663703 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:27.663798 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:27.663798 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:27.663798 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:27.664787 master-0 kubenswrapper[7614]: I0224 05:34:27.663811 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:28.663536 master-0 kubenswrapper[7614]: I0224 05:34:28.663387 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:28.663536 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:28.663536 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:28.663536 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:28.663536 master-0 kubenswrapper[7614]: I0224 05:34:28.663529 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:29.667534 master-0 kubenswrapper[7614]: I0224 05:34:29.667445 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:29.667534 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:29.667534 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:29.667534 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:29.668650 master-0 kubenswrapper[7614]: I0224 05:34:29.667550 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:30.664738 master-0 kubenswrapper[7614]: I0224 05:34:30.664640 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:30.664738 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:30.664738 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:30.664738 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:30.665210 master-0 kubenswrapper[7614]: I0224 05:34:30.664757 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:31.673031 master-0 kubenswrapper[7614]: I0224 05:34:31.672930 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:31.673031 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:31.673031 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:31.673031 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:31.674438 master-0 kubenswrapper[7614]: I0224 05:34:31.673048 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:32.174707 master-0 kubenswrapper[7614]: I0224 05:34:32.174601 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5"
Feb 24 05:34:32.175134 master-0 kubenswrapper[7614]: E0224 05:34:32.174892 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb"
Feb 24 05:34:32.663123 master-0 kubenswrapper[7614]: I0224 05:34:32.663026 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:32.663123 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:32.663123 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:32.663123 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:32.663123 master-0 kubenswrapper[7614]: I0224 05:34:32.663126 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:33.662299 master-0 kubenswrapper[7614]: I0224 05:34:33.662200 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:33.662299 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:33.662299 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:33.662299 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:33.662299 master-0 kubenswrapper[7614]: I0224 05:34:33.662284 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:34.664116 master-0 kubenswrapper[7614]: I0224 05:34:34.663960 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:34.664116 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:34.664116 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:34.664116 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:34.665651 master-0 kubenswrapper[7614]: I0224 05:34:34.664128 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:35.663349 master-0 kubenswrapper[7614]: I0224 05:34:35.663217 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:35.663349 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:35.663349 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:35.663349 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:35.663979 master-0 kubenswrapper[7614]: I0224 05:34:35.663563 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:36.663359 master-0 kubenswrapper[7614]: I0224 05:34:36.663210 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:36.663359 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:36.663359 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:36.663359 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:36.664515 master-0 kubenswrapper[7614]: I0224 05:34:36.663381 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:37.665065 master-0 kubenswrapper[7614]: I0224 05:34:37.664940 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:37.665065 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:37.665065 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:37.665065 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:37.666157 master-0 kubenswrapper[7614]: I0224 05:34:37.665107 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:38.664542 master-0 kubenswrapper[7614]: I0224 05:34:38.664440 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:38.664542 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:38.664542 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:38.664542 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:38.665144 master-0 kubenswrapper[7614]: I0224 05:34:38.664548 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:39.664065 master-0 kubenswrapper[7614]: I0224 05:34:39.663959 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:39.664065 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:39.664065 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:39.664065 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:39.664837 master-0 kubenswrapper[7614]: I0224 05:34:39.664076 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:40.663672 master-0 kubenswrapper[7614]: I0224 05:34:40.663582 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:40.663672 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:40.663672 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:40.663672 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:40.664339 master-0 kubenswrapper[7614]: I0224 05:34:40.663726 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:41.664541 master-0 kubenswrapper[7614]: I0224 05:34:41.664412 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:41.664541 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:41.664541 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:41.664541 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:41.665667 master-0 kubenswrapper[7614]: I0224 05:34:41.664546 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:42.664452 master-0 kubenswrapper[7614]: I0224 05:34:42.664358 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:42.664452 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:42.664452 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:42.664452 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:42.664452 master-0 kubenswrapper[7614]: I0224 05:34:42.664453 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:43.175460 master-0 kubenswrapper[7614]: I0224 05:34:43.175369 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5"
Feb 24 05:34:43.175911 master-0 kubenswrapper[7614]: E0224 05:34:43.175744 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb"
Feb 24 05:34:43.663807 master-0 kubenswrapper[7614]: I0224 05:34:43.663699 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:43.663807 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:43.663807 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:43.663807 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:43.664405 master-0 kubenswrapper[7614]: I0224 05:34:43.663833 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:44.663779 master-0 kubenswrapper[7614]: I0224 05:34:44.663664 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:44.663779 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:44.663779 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:44.663779 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:44.664587 master-0 kubenswrapper[7614]: I0224 05:34:44.663810 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:45.663419 master-0 kubenswrapper[7614]: I0224 05:34:45.663285 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:45.663419 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:45.663419 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:45.663419 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:45.664883 master-0 kubenswrapper[7614]: I0224 05:34:45.663441 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:46.663194 master-0 kubenswrapper[7614]: I0224 05:34:46.663067 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:46.663194 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:46.663194 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:46.663194 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:46.663812 master-0 kubenswrapper[7614]: I0224 05:34:46.663206 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:47.664383 master-0 kubenswrapper[7614]: I0224 05:34:47.664221 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:47.664383 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:47.664383 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:47.664383 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:47.665607 master-0 kubenswrapper[7614]: I0224 05:34:47.664447 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:48.665149 master-0 kubenswrapper[7614]: I0224 05:34:48.663045 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:48.665149 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:48.665149 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:48.665149 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:48.665149 master-0 kubenswrapper[7614]: I0224 05:34:48.663137 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:49.663434 master-0 kubenswrapper[7614]: I0224 05:34:49.663334 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:49.663434 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:49.663434 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:49.663434 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:49.663964 master-0 kubenswrapper[7614]: I0224 05:34:49.663448 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:50.663765 master-0 kubenswrapper[7614]: I0224 05:34:50.663628 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:50.663765 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:50.663765 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:50.663765 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:50.664881 master-0 kubenswrapper[7614]: I0224 05:34:50.663769 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:51.663329 master-0 kubenswrapper[7614]: I0224 05:34:51.663225 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:51.663329 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:51.663329 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:51.663329 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:51.664282 master-0 kubenswrapper[7614]: I0224 05:34:51.663384 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:52.663412 master-0 kubenswrapper[7614]: I0224 05:34:52.663286 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:52.663412 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:52.663412 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:52.663412 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:52.663412 master-0 kubenswrapper[7614]: I0224 05:34:52.663411 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:53.663406 master-0 kubenswrapper[7614]: I0224 05:34:53.663250 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:53.663406 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:53.663406 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:53.663406 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:53.664027 master-0 kubenswrapper[7614]: I0224 05:34:53.663504 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:54.663757 master-0 kubenswrapper[7614]: I0224 05:34:54.663629 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:54.663757 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:54.663757 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:54.663757 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:54.663757 master-0 kubenswrapper[7614]: I0224 05:34:54.663752 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:55.664166 master-0 kubenswrapper[7614]: I0224 05:34:55.664029 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:55.664166 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:55.664166 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:55.664166 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:55.665260 master-0 kubenswrapper[7614]: I0224 05:34:55.664183 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:56.663853 master-0 kubenswrapper[7614]: I0224 05:34:56.663750 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:56.663853 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:56.663853 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:56.663853 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:56.663853 master-0 kubenswrapper[7614]: I0224 05:34:56.663849 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:57.175563 master-0 kubenswrapper[7614]: I0224 05:34:57.175424 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5"
Feb 24 05:34:57.662187 master-0 kubenswrapper[7614]: I0224 05:34:57.662102 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:57.662187 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:57.662187 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:57.662187 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:57.662187 master-0 kubenswrapper[7614]: I0224 05:34:57.662184 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:58.396327 master-0 kubenswrapper[7614]: I0224 05:34:58.396230 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/5.log"
Feb 24 05:34:58.397144 master-0 kubenswrapper[7614]: I0224 05:34:58.396975 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"e5961da58ba0000499976ed125663a28df9508f26428d259f2513e76bb11ef6f"}
Feb 24 05:34:58.664225 master-0 kubenswrapper[7614]: I0224 05:34:58.664031 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:58.664225 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:58.664225 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:58.664225 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:58.664225 master-0 kubenswrapper[7614]: I0224 05:34:58.664188 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:34:59.664966 master-0 kubenswrapper[7614]: I0224 05:34:59.664765 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:34:59.664966 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:34:59.664966 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:34:59.664966 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:34:59.666250 master-0 kubenswrapper[7614]: I0224 05:34:59.665021 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:35:00.663945 master-0 kubenswrapper[7614]: I0224 05:35:00.663888 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:35:00.663945 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:35:00.663945 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:35:00.663945 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:35:00.664615 master-0 kubenswrapper[7614]: I0224 05:35:00.664568 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:35:01.541598 master-0 kubenswrapper[7614]: I0224 05:35:01.541493 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 24 05:35:01.542843 master-0 kubenswrapper[7614]: E0224 05:35:01.541955 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" containerName="collect-profiles"
Feb 24 05:35:01.542843 master-0 kubenswrapper[7614]: I0224 05:35:01.541981 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" containerName="collect-profiles"
Feb 24 05:35:01.542843 master-0 kubenswrapper[7614]: I0224 05:35:01.542218 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" containerName="collect-profiles"
Feb 24 05:35:01.543177 master-0 kubenswrapper[7614]: I0224 05:35:01.542911 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.548203 master-0 kubenswrapper[7614]: I0224 05:35:01.546476 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Feb 24 05:35:01.550156 master-0 kubenswrapper[7614]: I0224 05:35:01.550113 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-sh42j"
Feb 24 05:35:01.563568 master-0 kubenswrapper[7614]: I0224 05:35:01.563449 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 24 05:35:01.572475 master-0 kubenswrapper[7614]: I0224 05:35:01.572029 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kube-api-access\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.572475 master-0 kubenswrapper[7614]: I0224 05:35:01.572162 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.572475 master-0 kubenswrapper[7614]: I0224 05:35:01.572249 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-var-lock\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.663542 master-0 kubenswrapper[7614]: I0224 05:35:01.663457 7614 patch_prober.go:28]
interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:35:01.663542 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:35:01.663542 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:35:01.663542 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:35:01.663902 master-0 kubenswrapper[7614]: I0224 05:35:01.663553 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:35:01.674280 master-0 kubenswrapper[7614]: I0224 05:35:01.674191 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.674449 master-0 kubenswrapper[7614]: I0224 05:35:01.674418 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-var-lock\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.674531 master-0 kubenswrapper[7614]: I0224 05:35:01.674471 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.674674 master-0 kubenswrapper[7614]: I0224 05:35:01.674627 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-var-lock\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.674926 master-0 kubenswrapper[7614]: I0224 05:35:01.674879 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kube-api-access\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.712301 master-0 kubenswrapper[7614]: I0224 05:35:01.712217 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kube-api-access\") pod \"installer-2-master-0\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:01.893760 master-0 kubenswrapper[7614]: I0224 05:35:01.893594 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Feb 24 05:35:02.416246 master-0 kubenswrapper[7614]: I0224 05:35:02.416178 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Feb 24 05:35:02.433904 master-0 kubenswrapper[7614]: I0224 05:35:02.433825 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86","Type":"ContainerStarted","Data":"ee58f94aaa31646ed744150034f7422744de52c8ea47ed7679b57341645f987d"}
Feb 24 05:35:02.663682 master-0 kubenswrapper[7614]: I0224 05:35:02.663569 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:35:02.663682 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:35:02.663682 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:35:02.663682 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:35:02.667295 master-0 kubenswrapper[7614]: I0224 05:35:02.663722 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:35:03.448808 master-0 kubenswrapper[7614]: I0224 05:35:03.448684 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86","Type":"ContainerStarted","Data":"f1b7c6a181b3c4b7c381db07bd0f31166802251328ff7e67a24e7a9f4676e269"}
Feb 24 05:35:03.479651 master-0 kubenswrapper[7614]: I0224 05:35:03.479495 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.479428117 podStartE2EDuration="2.479428117s" podCreationTimestamp="2026-02-24 05:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:35:03.475143902 +0000 UTC m=+1234.509887148" watchObservedRunningTime="2026-02-24 05:35:03.479428117 +0000 UTC m=+1234.514171313"
Feb 24 05:35:03.663802 master-0 kubenswrapper[7614]: I0224 05:35:03.663723 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:35:03.663802 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:35:03.663802 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:35:03.663802 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:35:03.665151 master-0 kubenswrapper[7614]: I0224 05:35:03.665099 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:35:04.664534 master-0 kubenswrapper[7614]: I0224 05:35:04.664396 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:35:04.664534 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:35:04.664534 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:35:04.664534 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:35:04.665591 master-0 kubenswrapper[7614]: I0224 05:35:04.664574 7614 prober.go:107] "Probe
failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[... identical startup probe failure cycles for pod/router-default-7b65dc9fcb-zxkt2 repeat once per second from 05:35:05 through 05:35:29 ...]
Feb 24 05:35:30.664105 master-0 kubenswrapper[7614]: I0224 05:35:30.663991 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:35:30.664105 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:35:30.664105 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:35:30.664105 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:35:30.664105 master-0 kubenswrapper[7614]: I0224 05:35:30.664089 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:35:31.663083 master-0 kubenswrapper[7614]: I0224 05:35:31.662976 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:35:31.663083 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:35:31.663083 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:35:31.663083 master-0 kubenswrapper[7614]: healthz
check failed Feb 24 05:35:31.663811 master-0 kubenswrapper[7614]: I0224 05:35:31.663096 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:32.663532 master-0 kubenswrapper[7614]: I0224 05:35:32.663416 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:32.663532 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:32.663532 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:32.663532 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:32.663532 master-0 kubenswrapper[7614]: I0224 05:35:32.663515 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:33.663278 master-0 kubenswrapper[7614]: I0224 05:35:33.663198 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:33.663278 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:33.663278 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:33.663278 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:33.664071 master-0 kubenswrapper[7614]: I0224 05:35:33.663282 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:34.406980 master-0 kubenswrapper[7614]: I0224 05:35:34.406883 7614 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 24 05:35:34.407549 master-0 kubenswrapper[7614]: I0224 05:35:34.407437 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-recovery-controller" containerID="cri-o://856274500e14cb82370664b7fa9205dec8cf8d13575deae834feb4190cf946dd" gracePeriod=30 Feb 24 05:35:34.407695 master-0 kubenswrapper[7614]: I0224 05:35:34.407514 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" containerID="cri-o://ce1534e77be4055b68d61d3ba9e804a6088794580111559891de6340e0482ba1" gracePeriod=30 Feb 24 05:35:34.407695 master-0 kubenswrapper[7614]: I0224 05:35:34.407520 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-cert-syncer" containerID="cri-o://9c08e2b99bda6708882f4175ffb049128a51c70caca590d2e61441c5ea9ae2b4" gracePeriod=30 Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.408024 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: E0224 05:35:34.408534 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: 
I0224 05:35:34.408559 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: E0224 05:35:34.408596 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="wait-for-host-port" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.408609 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="wait-for-host-port" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: E0224 05:35:34.408637 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-cert-syncer" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.408651 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-cert-syncer" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: E0224 05:35:34.408671 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-recovery-controller" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.408684 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-recovery-controller" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: E0224 05:35:34.408721 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-cert-syncer" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.408733 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-cert-syncer" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.408957 7614 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-recovery-controller" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.408985 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-cert-syncer" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.409000 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.409037 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.409055 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler-cert-syncer" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: E0224 05:35:34.409291 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" Feb 24 05:35:34.409559 master-0 kubenswrapper[7614]: I0224 05:35:34.409329 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb9c3b6f4ad10a97951cbde655daea9" containerName="kube-scheduler" Feb 24 05:35:34.499014 master-0 kubenswrapper[7614]: I0224 05:35:34.498927 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:34.499014 master-0 kubenswrapper[7614]: I0224 05:35:34.499014 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:34.596983 master-0 kubenswrapper[7614]: I0224 05:35:34.596928 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler-cert-syncer/1.log" Feb 24 05:35:34.598991 master-0 kubenswrapper[7614]: I0224 05:35:34.598945 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler-cert-syncer/0.log" Feb 24 05:35:34.599974 master-0 kubenswrapper[7614]: I0224 05:35:34.599937 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler/0.log" Feb 24 05:35:34.600046 master-0 kubenswrapper[7614]: I0224 05:35:34.600001 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:34.600046 master-0 kubenswrapper[7614]: I0224 05:35:34.600034 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:34.600211 master-0 kubenswrapper[7614]: I0224 05:35:34.600182 7614 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:34.600260 master-0 kubenswrapper[7614]: I0224 05:35:34.600227 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:34.601070 master-0 kubenswrapper[7614]: I0224 05:35:34.601033 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:34.603965 master-0 kubenswrapper[7614]: I0224 05:35:34.603909 7614 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="ebb9c3b6f4ad10a97951cbde655daea9" podUID="416b60c941b7224bbf94e8f78b59b910" Feb 24 05:35:34.663066 master-0 kubenswrapper[7614]: I0224 05:35:34.662879 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:34.663066 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:34.663066 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:34.663066 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:34.663066 master-0 kubenswrapper[7614]: I0224 05:35:34.662973 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" 
podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:34.701020 master-0 kubenswrapper[7614]: I0224 05:35:34.700946 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-cert-dir\") pod \"ebb9c3b6f4ad10a97951cbde655daea9\" (UID: \"ebb9c3b6f4ad10a97951cbde655daea9\") " Feb 24 05:35:34.701020 master-0 kubenswrapper[7614]: I0224 05:35:34.701027 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-resource-dir\") pod \"ebb9c3b6f4ad10a97951cbde655daea9\" (UID: \"ebb9c3b6f4ad10a97951cbde655daea9\") " Feb 24 05:35:34.701752 master-0 kubenswrapper[7614]: I0224 05:35:34.701108 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ebb9c3b6f4ad10a97951cbde655daea9" (UID: "ebb9c3b6f4ad10a97951cbde655daea9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:35:34.701752 master-0 kubenswrapper[7614]: I0224 05:35:34.701144 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebb9c3b6f4ad10a97951cbde655daea9" (UID: "ebb9c3b6f4ad10a97951cbde655daea9"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:35:34.701752 master-0 kubenswrapper[7614]: I0224 05:35:34.701548 7614 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:35:34.701752 master-0 kubenswrapper[7614]: I0224 05:35:34.701570 7614 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebb9c3b6f4ad10a97951cbde655daea9-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:35:34.725802 master-0 kubenswrapper[7614]: I0224 05:35:34.725723 7614 generic.go:334] "Generic (PLEG): container finished" podID="17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" containerID="f1b7c6a181b3c4b7c381db07bd0f31166802251328ff7e67a24e7a9f4676e269" exitCode=0 Feb 24 05:35:34.725940 master-0 kubenswrapper[7614]: I0224 05:35:34.725855 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86","Type":"ContainerDied","Data":"f1b7c6a181b3c4b7c381db07bd0f31166802251328ff7e67a24e7a9f4676e269"} Feb 24 05:35:34.730716 master-0 kubenswrapper[7614]: I0224 05:35:34.730673 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler-cert-syncer/1.log" Feb 24 05:35:34.733411 master-0 kubenswrapper[7614]: I0224 05:35:34.733365 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler-cert-syncer/0.log" Feb 24 05:35:34.734447 master-0 kubenswrapper[7614]: I0224 05:35:34.734385 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler/0.log" Feb 24 
05:35:34.735279 master-0 kubenswrapper[7614]: I0224 05:35:34.735236 7614 generic.go:334] "Generic (PLEG): container finished" podID="ebb9c3b6f4ad10a97951cbde655daea9" containerID="9c08e2b99bda6708882f4175ffb049128a51c70caca590d2e61441c5ea9ae2b4" exitCode=2 Feb 24 05:35:34.735391 master-0 kubenswrapper[7614]: I0224 05:35:34.735278 7614 generic.go:334] "Generic (PLEG): container finished" podID="ebb9c3b6f4ad10a97951cbde655daea9" containerID="ce1534e77be4055b68d61d3ba9e804a6088794580111559891de6340e0482ba1" exitCode=0 Feb 24 05:35:34.735391 master-0 kubenswrapper[7614]: I0224 05:35:34.735293 7614 generic.go:334] "Generic (PLEG): container finished" podID="ebb9c3b6f4ad10a97951cbde655daea9" containerID="856274500e14cb82370664b7fa9205dec8cf8d13575deae834feb4190cf946dd" exitCode=0 Feb 24 05:35:34.735391 master-0 kubenswrapper[7614]: I0224 05:35:34.735334 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:34.735624 master-0 kubenswrapper[7614]: I0224 05:35:34.735422 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="639ae518497ba1706dda96412a5f991e087afb115a63188b2e7c534e5017f902" Feb 24 05:35:34.735624 master-0 kubenswrapper[7614]: I0224 05:35:34.735487 7614 scope.go:117] "RemoveContainer" containerID="9e0cc0f7f581085a792db3f9717a0c7d3e86218c9ccfa7f2c67da547aa98fac9" Feb 24 05:35:34.757162 master-0 kubenswrapper[7614]: I0224 05:35:34.757077 7614 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="ebb9c3b6f4ad10a97951cbde655daea9" podUID="416b60c941b7224bbf94e8f78b59b910" Feb 24 05:35:34.785878 master-0 kubenswrapper[7614]: I0224 05:35:34.783547 7614 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
oldPodUID="ebb9c3b6f4ad10a97951cbde655daea9" podUID="416b60c941b7224bbf94e8f78b59b910" Feb 24 05:35:34.804267 master-0 kubenswrapper[7614]: I0224 05:35:34.804018 7614 scope.go:117] "RemoveContainer" containerID="4ada702e991319865f9dacb414ee4288bbdec2d1eeae1681a213589c60b83506" Feb 24 05:35:35.219668 master-0 kubenswrapper[7614]: I0224 05:35:35.219552 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb9c3b6f4ad10a97951cbde655daea9" path="/var/lib/kubelet/pods/ebb9c3b6f4ad10a97951cbde655daea9/volumes" Feb 24 05:35:35.664348 master-0 kubenswrapper[7614]: I0224 05:35:35.663985 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:35.664348 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:35.664348 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:35.664348 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:35.664348 master-0 kubenswrapper[7614]: I0224 05:35:35.664120 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:35.746878 master-0 kubenswrapper[7614]: I0224 05:35:35.746833 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_ebb9c3b6f4ad10a97951cbde655daea9/kube-scheduler-cert-syncer/1.log" Feb 24 05:35:36.113938 master-0 kubenswrapper[7614]: I0224 05:35:36.113873 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 05:35:36.226291 master-0 kubenswrapper[7614]: I0224 05:35:36.226204 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kube-api-access\") pod \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " Feb 24 05:35:36.226653 master-0 kubenswrapper[7614]: I0224 05:35:36.226584 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-var-lock\") pod \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " Feb 24 05:35:36.226653 master-0 kubenswrapper[7614]: I0224 05:35:36.226641 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kubelet-dir\") pod \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\" (UID: \"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86\") " Feb 24 05:35:36.227239 master-0 kubenswrapper[7614]: I0224 05:35:36.226784 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-var-lock" (OuterVolumeSpecName: "var-lock") pod "17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" (UID: "17ac3cae-8c8a-4e8f-9f58-ab82b543ec86"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:35:36.227239 master-0 kubenswrapper[7614]: I0224 05:35:36.226872 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" (UID: "17ac3cae-8c8a-4e8f-9f58-ab82b543ec86"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:35:36.229339 master-0 kubenswrapper[7614]: I0224 05:35:36.229248 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:35:36.229916 master-0 kubenswrapper[7614]: I0224 05:35:36.229863 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" (UID: "17ac3cae-8c8a-4e8f-9f58-ab82b543ec86"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:35:36.229916 master-0 kubenswrapper[7614]: I0224 05:35:36.229867 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:35:36.338543 master-0 kubenswrapper[7614]: I0224 05:35:36.336493 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17ac3cae-8c8a-4e8f-9f58-ab82b543ec86-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 05:35:36.663528 master-0 kubenswrapper[7614]: I0224 05:35:36.663208 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:36.663528 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:36.663528 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:36.663528 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:36.663528 master-0 kubenswrapper[7614]: I0224 05:35:36.663372 7614 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:36.759844 master-0 kubenswrapper[7614]: I0224 05:35:36.759764 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"17ac3cae-8c8a-4e8f-9f58-ab82b543ec86","Type":"ContainerDied","Data":"ee58f94aaa31646ed744150034f7422744de52c8ea47ed7679b57341645f987d"} Feb 24 05:35:36.759844 master-0 kubenswrapper[7614]: I0224 05:35:36.759840 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee58f94aaa31646ed744150034f7422744de52c8ea47ed7679b57341645f987d" Feb 24 05:35:36.760771 master-0 kubenswrapper[7614]: I0224 05:35:36.759930 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 05:35:37.662954 master-0 kubenswrapper[7614]: I0224 05:35:37.662879 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:37.662954 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:37.662954 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:37.662954 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:37.663641 master-0 kubenswrapper[7614]: I0224 05:35:37.662987 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:38.663156 master-0 kubenswrapper[7614]: I0224 05:35:38.663061 7614 
patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:38.663156 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:38.663156 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:38.663156 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:38.664199 master-0 kubenswrapper[7614]: I0224 05:35:38.663161 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:39.663512 master-0 kubenswrapper[7614]: I0224 05:35:39.663414 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:39.663512 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:39.663512 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:39.663512 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:39.664759 master-0 kubenswrapper[7614]: I0224 05:35:39.663525 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:40.664278 master-0 kubenswrapper[7614]: I0224 05:35:40.664156 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:40.664278 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:40.664278 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:40.664278 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:40.664942 master-0 kubenswrapper[7614]: I0224 05:35:40.664376 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:41.662786 master-0 kubenswrapper[7614]: I0224 05:35:41.662691 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:35:41.662786 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:35:41.662786 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:35:41.662786 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:35:41.662786 master-0 kubenswrapper[7614]: I0224 05:35:41.662766 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:35:41.663542 master-0 kubenswrapper[7614]: I0224 05:35:41.662830 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:35:41.663542 master-0 kubenswrapper[7614]: I0224 05:35:41.663532 7614 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"4f3ae3a1fb93152f16413963009dac29f899944719e22e0315c1d5fd940eb4a6"} 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" containerMessage="Container router failed startup probe, will be restarted" Feb 24 05:35:41.663729 master-0 kubenswrapper[7614]: I0224 05:35:41.663583 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" containerID="cri-o://4f3ae3a1fb93152f16413963009dac29f899944719e22e0315c1d5fd940eb4a6" gracePeriod=3600 Feb 24 05:35:45.173912 master-0 kubenswrapper[7614]: I0224 05:35:45.173852 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:45.208736 master-0 kubenswrapper[7614]: I0224 05:35:45.208659 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="39a29d11-941f-4a92-9fd1-876963f7e6db" Feb 24 05:35:45.208736 master-0 kubenswrapper[7614]: I0224 05:35:45.208715 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="39a29d11-941f-4a92-9fd1-876963f7e6db" Feb 24 05:35:45.222763 master-0 kubenswrapper[7614]: I0224 05:35:45.222687 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 24 05:35:45.229549 master-0 kubenswrapper[7614]: I0224 05:35:45.229455 7614 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:45.240418 master-0 kubenswrapper[7614]: I0224 05:35:45.240293 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 24 05:35:45.250467 master-0 kubenswrapper[7614]: I0224 05:35:45.250418 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:45.260899 master-0 kubenswrapper[7614]: I0224 05:35:45.260832 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 24 05:35:45.843767 master-0 kubenswrapper[7614]: I0224 05:35:45.843644 7614 generic.go:334] "Generic (PLEG): container finished" podID="416b60c941b7224bbf94e8f78b59b910" containerID="f8b39be67a04cf9d38216643f5aaffec2fb3ec2bf8622811dc4fae7f64bc4612" exitCode=0 Feb 24 05:35:45.843767 master-0 kubenswrapper[7614]: I0224 05:35:45.843736 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerDied","Data":"f8b39be67a04cf9d38216643f5aaffec2fb3ec2bf8622811dc4fae7f64bc4612"} Feb 24 05:35:45.844192 master-0 kubenswrapper[7614]: I0224 05:35:45.843815 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"dd7b027ed4dfa318c6f765780e7da4b378d4a45eec9c4d60403e7f1cb887d422"} Feb 24 05:35:46.859009 master-0 kubenswrapper[7614]: I0224 05:35:46.858929 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"7df6d68e4eccd870d7979d194dd996cd069e699306fa6a1039debffe4bc0d5b8"} Feb 24 05:35:46.859009 master-0 kubenswrapper[7614]: I0224 05:35:46.859004 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"920bf35ac9d63c2c6150dee7e01c82a4f11232a87154bda4b9a5efa5e5177bc2"} Feb 24 05:35:47.874673 master-0 kubenswrapper[7614]: I0224 05:35:47.874569 7614 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"5d14453ddb467f5c28b4c89fef9f05456c5bc2ab851e4cdb483a72f52c45f0ea"} Feb 24 05:35:47.875691 master-0 kubenswrapper[7614]: I0224 05:35:47.874920 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:35:47.943157 master-0 kubenswrapper[7614]: I0224 05:35:47.943058 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.943035493 podStartE2EDuration="2.943035493s" podCreationTimestamp="2026-02-24 05:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:35:47.939604601 +0000 UTC m=+1278.974347827" watchObservedRunningTime="2026-02-24 05:35:47.943035493 +0000 UTC m=+1278.977778649" Feb 24 05:35:49.944292 master-0 kubenswrapper[7614]: I0224 05:35:49.944164 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 24 05:35:49.945385 master-0 kubenswrapper[7614]: E0224 05:35:49.944647 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" containerName="installer" Feb 24 05:35:49.945385 master-0 kubenswrapper[7614]: I0224 05:35:49.944673 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" containerName="installer" Feb 24 05:35:49.945385 master-0 kubenswrapper[7614]: I0224 05:35:49.944965 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" containerName="installer" Feb 24 05:35:49.945828 master-0 kubenswrapper[7614]: I0224 05:35:49.945742 7614 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:49.948723 master-0 kubenswrapper[7614]: I0224 05:35:49.948683 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjfbr" Feb 24 05:35:49.949007 master-0 kubenswrapper[7614]: I0224 05:35:49.948920 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 05:35:49.964155 master-0 kubenswrapper[7614]: I0224 05:35:49.964056 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 24 05:35:49.985762 master-0 kubenswrapper[7614]: I0224 05:35:49.985696 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-var-lock\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:49.985762 master-0 kubenswrapper[7614]: I0224 05:35:49.985766 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:49.986119 master-0 kubenswrapper[7614]: I0224 05:35:49.985804 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/154c1cd0-d69a-4213-8fc2-2d80217c358e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:50.088638 master-0 
kubenswrapper[7614]: I0224 05:35:50.088427 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-var-lock\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:50.088638 master-0 kubenswrapper[7614]: I0224 05:35:50.088553 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:50.088638 master-0 kubenswrapper[7614]: I0224 05:35:50.088614 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/154c1cd0-d69a-4213-8fc2-2d80217c358e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:50.089224 master-0 kubenswrapper[7614]: I0224 05:35:50.088608 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-var-lock\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:50.089224 master-0 kubenswrapper[7614]: I0224 05:35:50.088689 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:50.114468 master-0 
kubenswrapper[7614]: I0224 05:35:50.114017 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/154c1cd0-d69a-4213-8fc2-2d80217c358e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:50.294115 master-0 kubenswrapper[7614]: I0224 05:35:50.293999 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:35:50.460008 master-0 kubenswrapper[7614]: I0224 05:35:50.459605 7614 scope.go:117] "RemoveContainer" containerID="c31c78349f1ab025d6ecadfbb83b67c0bce9a73e637fa587febda4c860d8e036" Feb 24 05:35:50.513622 master-0 kubenswrapper[7614]: I0224 05:35:50.513542 7614 scope.go:117] "RemoveContainer" containerID="ce1534e77be4055b68d61d3ba9e804a6088794580111559891de6340e0482ba1" Feb 24 05:35:50.548235 master-0 kubenswrapper[7614]: I0224 05:35:50.546182 7614 scope.go:117] "RemoveContainer" containerID="856274500e14cb82370664b7fa9205dec8cf8d13575deae834feb4190cf946dd" Feb 24 05:35:50.843611 master-0 kubenswrapper[7614]: I0224 05:35:50.843521 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 24 05:35:50.847772 master-0 kubenswrapper[7614]: W0224 05:35:50.847678 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod154c1cd0_d69a_4213_8fc2_2d80217c358e.slice/crio-49dc4d8de02054e0c7305ee0abb7f18a0ace00c3ecc8e971017afe0705de270d WatchSource:0}: Error finding container 49dc4d8de02054e0c7305ee0abb7f18a0ace00c3ecc8e971017afe0705de270d: Status 404 returned error can't find the container with id 49dc4d8de02054e0c7305ee0abb7f18a0ace00c3ecc8e971017afe0705de270d Feb 24 05:35:50.910758 master-0 kubenswrapper[7614]: I0224 05:35:50.910701 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"154c1cd0-d69a-4213-8fc2-2d80217c358e","Type":"ContainerStarted","Data":"49dc4d8de02054e0c7305ee0abb7f18a0ace00c3ecc8e971017afe0705de270d"} Feb 24 05:35:51.920858 master-0 kubenswrapper[7614]: I0224 05:35:51.920780 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"154c1cd0-d69a-4213-8fc2-2d80217c358e","Type":"ContainerStarted","Data":"b3e9d6215183b510ab3e035f5ecd9035f6b7ed2689c41b39393ba0067bb54568"} Feb 24 05:35:51.938237 master-0 kubenswrapper[7614]: I0224 05:35:51.938146 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.93811489 podStartE2EDuration="2.93811489s" podCreationTimestamp="2026-02-24 05:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:35:51.937618536 +0000 UTC m=+1282.972361722" watchObservedRunningTime="2026-02-24 05:35:51.93811489 +0000 UTC m=+1282.972858046" Feb 24 05:35:57.663107 master-0 kubenswrapper[7614]: I0224 05:35:57.662990 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Feb 24 05:35:57.664410 master-0 kubenswrapper[7614]: I0224 05:35:57.664358 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.667867 master-0 kubenswrapper[7614]: I0224 05:35:57.667818 7614 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 05:35:57.668489 master-0 kubenswrapper[7614]: I0224 05:35:57.668446 7614 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-d88q9" Feb 24 05:35:57.682394 master-0 kubenswrapper[7614]: I0224 05:35:57.682291 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Feb 24 05:35:57.770618 master-0 kubenswrapper[7614]: I0224 05:35:57.770524 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.770950 master-0 kubenswrapper[7614]: I0224 05:35:57.770737 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.771256 master-0 kubenswrapper[7614]: I0224 05:35:57.771198 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.873704 master-0 
kubenswrapper[7614]: I0224 05:35:57.873590 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.874026 master-0 kubenswrapper[7614]: I0224 05:35:57.873855 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.874026 master-0 kubenswrapper[7614]: I0224 05:35:57.873851 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.874026 master-0 kubenswrapper[7614]: I0224 05:35:57.873971 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.874219 master-0 kubenswrapper[7614]: I0224 05:35:57.874114 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" 
Feb 24 05:35:57.898985 master-0 kubenswrapper[7614]: I0224 05:35:57.898909 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") " pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:57.993887 master-0 kubenswrapper[7614]: I0224 05:35:57.993785 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" Feb 24 05:35:58.521970 master-0 kubenswrapper[7614]: I0224 05:35:58.521887 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Feb 24 05:35:58.538771 master-0 kubenswrapper[7614]: W0224 05:35:58.538689 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod95004cdb_0c51_4cd2_8fa4_28bdf9901ec6.slice/crio-b5014913008664f81c058adb929e9ab0f5679eae3b998e111a8d5dd7cf444f9d WatchSource:0}: Error finding container b5014913008664f81c058adb929e9ab0f5679eae3b998e111a8d5dd7cf444f9d: Status 404 returned error can't find the container with id b5014913008664f81c058adb929e9ab0f5679eae3b998e111a8d5dd7cf444f9d Feb 24 05:35:58.986579 master-0 kubenswrapper[7614]: I0224 05:35:58.986484 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6","Type":"ContainerStarted","Data":"b5014913008664f81c058adb929e9ab0f5679eae3b998e111a8d5dd7cf444f9d"} Feb 24 05:36:00.000350 master-0 kubenswrapper[7614]: I0224 05:36:00.000254 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" 
event={"ID":"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6","Type":"ContainerStarted","Data":"a2a044970cc81b5de15bc7ba6f6e6e6c9ceb095dd83aa2eb02ced08f9d7400e0"} Feb 24 05:36:00.028108 master-0 kubenswrapper[7614]: I0224 05:36:00.028006 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" podStartSLOduration=3.027984098 podStartE2EDuration="3.027984098s" podCreationTimestamp="2026-02-24 05:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:36:00.025048039 +0000 UTC m=+1291.059791225" watchObservedRunningTime="2026-02-24 05:36:00.027984098 +0000 UTC m=+1291.062727254" Feb 24 05:36:01.851530 master-0 kubenswrapper[7614]: I0224 05:36:01.851466 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"] Feb 24 05:36:02.016759 master-0 kubenswrapper[7614]: I0224 05:36:02.016666 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" podUID="95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" containerName="installer" containerID="cri-o://a2a044970cc81b5de15bc7ba6f6e6e6c9ceb095dd83aa2eb02ced08f9d7400e0" gracePeriod=30 Feb 24 05:36:05.058014 master-0 kubenswrapper[7614]: I0224 05:36:05.057840 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 24 05:36:05.059407 master-0 kubenswrapper[7614]: I0224 05:36:05.059371 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.071678 master-0 kubenswrapper[7614]: I0224 05:36:05.071609 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 24 05:36:05.207851 master-0 kubenswrapper[7614]: I0224 05:36:05.207765 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.208205 master-0 kubenswrapper[7614]: I0224 05:36:05.208019 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.208269 master-0 kubenswrapper[7614]: I0224 05:36:05.208242 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.310990 master-0 kubenswrapper[7614]: I0224 05:36:05.310780 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.310990 master-0 kubenswrapper[7614]: I0224 05:36:05.310992 7614 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.311365 master-0 kubenswrapper[7614]: I0224 05:36:05.311049 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.311365 master-0 kubenswrapper[7614]: I0224 05:36:05.311099 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.311365 master-0 kubenswrapper[7614]: I0224 05:36:05.311216 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.332429 master-0 kubenswrapper[7614]: I0224 05:36:05.332355 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.402469 master-0 kubenswrapper[7614]: I0224 05:36:05.402352 7614 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:36:05.889722 master-0 kubenswrapper[7614]: I0224 05:36:05.889649 7614 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 24 05:36:06.054116 master-0 kubenswrapper[7614]: I0224 05:36:06.054018 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a","Type":"ContainerStarted","Data":"e43e86c2da24898ed3ceda5fba223181eeaf5fa1fa61d7f1b9a1561a31040dae"} Feb 24 05:36:07.064335 master-0 kubenswrapper[7614]: I0224 05:36:07.064255 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a","Type":"ContainerStarted","Data":"8904c0214073753fcab4acc8adc0da951a7afde283497eeb5955cf76d5cf0b70"} Feb 24 05:36:07.103414 master-0 kubenswrapper[7614]: I0224 05:36:07.103085 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.103051613 podStartE2EDuration="2.103051613s" podCreationTimestamp="2026-02-24 05:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:36:07.093861376 +0000 UTC m=+1298.128604562" watchObservedRunningTime="2026-02-24 05:36:07.103051613 +0000 UTC m=+1298.137794809" Feb 24 05:36:24.291005 master-0 kubenswrapper[7614]: I0224 05:36:24.290863 7614 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 05:36:24.292169 master-0 kubenswrapper[7614]: I0224 05:36:24.291499 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" 
containerName="kube-controller-manager" containerID="cri-o://e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0" gracePeriod=30 Feb 24 05:36:24.292169 master-0 kubenswrapper[7614]: I0224 05:36:24.291653 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" containerID="cri-o://9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6" gracePeriod=30 Feb 24 05:36:24.292169 master-0 kubenswrapper[7614]: I0224 05:36:24.291704 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997" gracePeriod=30 Feb 24 05:36:24.292169 master-0 kubenswrapper[7614]: I0224 05:36:24.291704 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb" gracePeriod=30 Feb 24 05:36:24.295167 master-0 kubenswrapper[7614]: I0224 05:36:24.295099 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 05:36:24.295665 master-0 kubenswrapper[7614]: E0224 05:36:24.295617 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.295665 master-0 kubenswrapper[7614]: I0224 05:36:24.295651 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" 
containerName="cluster-policy-controller" Feb 24 05:36:24.295665 master-0 kubenswrapper[7614]: E0224 05:36:24.295669 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: I0224 05:36:24.295684 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: E0224 05:36:24.295720 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-recovery-controller" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: I0224 05:36:24.295738 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-recovery-controller" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: E0224 05:36:24.295788 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: I0224 05:36:24.295808 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: E0224 05:36:24.295866 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: I0224 05:36:24.295886 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: E0224 05:36:24.295914 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" 
containerName="kube-controller-manager-cert-syncer" Feb 24 05:36:24.295941 master-0 kubenswrapper[7614]: I0224 05:36:24.295932 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-cert-syncer" Feb 24 05:36:24.296677 master-0 kubenswrapper[7614]: E0224 05:36:24.295969 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-cert-syncer" Feb 24 05:36:24.296677 master-0 kubenswrapper[7614]: I0224 05:36:24.295989 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-cert-syncer" Feb 24 05:36:24.296677 master-0 kubenswrapper[7614]: I0224 05:36:24.296304 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.296677 master-0 kubenswrapper[7614]: I0224 05:36:24.296383 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-recovery-controller" Feb 24 05:36:24.296677 master-0 kubenswrapper[7614]: I0224 05:36:24.296415 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.296677 master-0 kubenswrapper[7614]: I0224 05:36:24.296443 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager" Feb 24 05:36:24.296677 master-0 kubenswrapper[7614]: I0224 05:36:24.296566 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.296677 master-0 kubenswrapper[7614]: I0224 05:36:24.296661 7614 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.297172 master-0 kubenswrapper[7614]: I0224 05:36:24.296691 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-cert-syncer" Feb 24 05:36:24.297172 master-0 kubenswrapper[7614]: I0224 05:36:24.296782 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="kube-controller-manager-cert-syncer" Feb 24 05:36:24.297172 master-0 kubenswrapper[7614]: E0224 05:36:24.297118 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.297172 master-0 kubenswrapper[7614]: I0224 05:36:24.297141 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.297450 master-0 kubenswrapper[7614]: I0224 05:36:24.297418 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.297757 master-0 kubenswrapper[7614]: E0224 05:36:24.297690 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.297757 master-0 kubenswrapper[7614]: I0224 05:36:24.297739 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="79656ffd720980cfc7e8a06d9f509855" containerName="cluster-policy-controller" Feb 24 05:36:24.376651 master-0 kubenswrapper[7614]: I0224 05:36:24.376521 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:24.376912 master-0 kubenswrapper[7614]: I0224 05:36:24.376850 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:24.479156 master-0 kubenswrapper[7614]: I0224 05:36:24.479012 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:24.479251 master-0 kubenswrapper[7614]: I0224 05:36:24.479164 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:24.479251 master-0 kubenswrapper[7614]: I0224 05:36:24.479174 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:24.479471 master-0 kubenswrapper[7614]: I0224 05:36:24.479378 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir\") 
pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:24.562015 master-0 kubenswrapper[7614]: I0224 05:36:24.561877 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/kube-controller-manager-cert-syncer/1.log" Feb 24 05:36:24.563632 master-0 kubenswrapper[7614]: I0224 05:36:24.563572 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/3.log" Feb 24 05:36:24.565177 master-0 kubenswrapper[7614]: I0224 05:36:24.565132 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/kube-controller-manager-cert-syncer/0.log" Feb 24 05:36:24.566175 master-0 kubenswrapper[7614]: I0224 05:36:24.566132 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:24.570652 master-0 kubenswrapper[7614]: I0224 05:36:24.570588 7614 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="79656ffd720980cfc7e8a06d9f509855" podUID="c0305da6e0b04a4394ef2888a487bfa1" Feb 24 05:36:24.682623 master-0 kubenswrapper[7614]: I0224 05:36:24.682521 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-resource-dir\") pod \"79656ffd720980cfc7e8a06d9f509855\" (UID: \"79656ffd720980cfc7e8a06d9f509855\") " Feb 24 05:36:24.682880 master-0 kubenswrapper[7614]: I0224 05:36:24.682701 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "79656ffd720980cfc7e8a06d9f509855" (UID: "79656ffd720980cfc7e8a06d9f509855"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:36:24.682880 master-0 kubenswrapper[7614]: I0224 05:36:24.682855 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-cert-dir\") pod \"79656ffd720980cfc7e8a06d9f509855\" (UID: \"79656ffd720980cfc7e8a06d9f509855\") " Feb 24 05:36:24.683069 master-0 kubenswrapper[7614]: I0224 05:36:24.682892 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "79656ffd720980cfc7e8a06d9f509855" (UID: "79656ffd720980cfc7e8a06d9f509855"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:36:24.683477 master-0 kubenswrapper[7614]: I0224 05:36:24.683360 7614 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:36:24.683477 master-0 kubenswrapper[7614]: I0224 05:36:24.683384 7614 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/79656ffd720980cfc7e8a06d9f509855-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:36:25.189976 master-0 kubenswrapper[7614]: I0224 05:36:25.189892 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79656ffd720980cfc7e8a06d9f509855" path="/var/lib/kubelet/pods/79656ffd720980cfc7e8a06d9f509855/volumes" Feb 24 05:36:25.254944 master-0 kubenswrapper[7614]: I0224 05:36:25.254874 7614 generic.go:334] "Generic (PLEG): container finished" podID="154c1cd0-d69a-4213-8fc2-2d80217c358e" containerID="b3e9d6215183b510ab3e035f5ecd9035f6b7ed2689c41b39393ba0067bb54568" exitCode=0 Feb 24 05:36:25.255661 master-0 kubenswrapper[7614]: I0224 05:36:25.254997 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"154c1cd0-d69a-4213-8fc2-2d80217c358e","Type":"ContainerDied","Data":"b3e9d6215183b510ab3e035f5ecd9035f6b7ed2689c41b39393ba0067bb54568"} Feb 24 05:36:25.260127 master-0 kubenswrapper[7614]: I0224 05:36:25.260044 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/kube-controller-manager-cert-syncer/1.log" Feb 24 05:36:25.261596 master-0 kubenswrapper[7614]: I0224 05:36:25.261544 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/cluster-policy-controller/3.log" Feb 24 05:36:25.263096 master-0 kubenswrapper[7614]: I0224 05:36:25.262966 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_79656ffd720980cfc7e8a06d9f509855/kube-controller-manager-cert-syncer/0.log" Feb 24 05:36:25.263687 master-0 kubenswrapper[7614]: I0224 05:36:25.263643 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb" exitCode=2 Feb 24 05:36:25.263687 master-0 kubenswrapper[7614]: I0224 05:36:25.263679 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6" exitCode=0 Feb 24 05:36:25.263864 master-0 kubenswrapper[7614]: I0224 05:36:25.263693 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997" exitCode=0 Feb 24 05:36:25.263864 master-0 kubenswrapper[7614]: I0224 05:36:25.263707 7614 generic.go:334] "Generic (PLEG): container finished" podID="79656ffd720980cfc7e8a06d9f509855" containerID="e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0" exitCode=0 Feb 24 05:36:25.263864 master-0 kubenswrapper[7614]: I0224 05:36:25.263756 7614 scope.go:117] "RemoveContainer" containerID="ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb" Feb 24 05:36:25.263864 master-0 kubenswrapper[7614]: I0224 05:36:25.263809 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:25.289246 master-0 kubenswrapper[7614]: I0224 05:36:25.288834 7614 scope.go:117] "RemoveContainer" containerID="9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6" Feb 24 05:36:25.292564 master-0 kubenswrapper[7614]: I0224 05:36:25.292470 7614 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="79656ffd720980cfc7e8a06d9f509855" podUID="c0305da6e0b04a4394ef2888a487bfa1" Feb 24 05:36:25.313346 master-0 kubenswrapper[7614]: I0224 05:36:25.312927 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" Feb 24 05:36:25.335658 master-0 kubenswrapper[7614]: I0224 05:36:25.335604 7614 scope.go:117] "RemoveContainer" containerID="806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997" Feb 24 05:36:25.365116 master-0 kubenswrapper[7614]: I0224 05:36:25.365045 7614 scope.go:117] "RemoveContainer" containerID="dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c" Feb 24 05:36:25.400055 master-0 kubenswrapper[7614]: I0224 05:36:25.399989 7614 scope.go:117] "RemoveContainer" containerID="e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0" Feb 24 05:36:25.425143 master-0 kubenswrapper[7614]: I0224 05:36:25.425067 7614 scope.go:117] "RemoveContainer" containerID="ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb" Feb 24 05:36:25.425808 master-0 kubenswrapper[7614]: E0224 05:36:25.425747 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": container with ID starting with ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb not found: ID does not exist" 
containerID="ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb" Feb 24 05:36:25.425921 master-0 kubenswrapper[7614]: I0224 05:36:25.425800 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb"} err="failed to get container status \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": rpc error: code = NotFound desc = could not find container \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": container with ID starting with ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb not found: ID does not exist" Feb 24 05:36:25.425921 master-0 kubenswrapper[7614]: I0224 05:36:25.425840 7614 scope.go:117] "RemoveContainer" containerID="9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6" Feb 24 05:36:25.426752 master-0 kubenswrapper[7614]: E0224 05:36:25.426663 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": container with ID starting with 9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6 not found: ID does not exist" containerID="9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6" Feb 24 05:36:25.426849 master-0 kubenswrapper[7614]: I0224 05:36:25.426758 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6"} err="failed to get container status \"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": rpc error: code = NotFound desc = could not find container \"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": container with ID starting with 9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6 not found: ID does not exist" Feb 24 05:36:25.426849 master-0 
kubenswrapper[7614]: I0224 05:36:25.426807 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" Feb 24 05:36:25.427457 master-0 kubenswrapper[7614]: E0224 05:36:25.427380 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": container with ID starting with 386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982 not found: ID does not exist" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" Feb 24 05:36:25.427567 master-0 kubenswrapper[7614]: I0224 05:36:25.427472 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"} err="failed to get container status \"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": rpc error: code = NotFound desc = could not find container \"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": container with ID starting with 386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982 not found: ID does not exist" Feb 24 05:36:25.427567 master-0 kubenswrapper[7614]: I0224 05:36:25.427527 7614 scope.go:117] "RemoveContainer" containerID="806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997" Feb 24 05:36:25.428142 master-0 kubenswrapper[7614]: E0224 05:36:25.428083 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": container with ID starting with 806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997 not found: ID does not exist" containerID="806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997" Feb 24 05:36:25.428232 master-0 kubenswrapper[7614]: I0224 05:36:25.428125 7614 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997"} err="failed to get container status \"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": rpc error: code = NotFound desc = could not find container \"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": container with ID starting with 806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997 not found: ID does not exist" Feb 24 05:36:25.428232 master-0 kubenswrapper[7614]: I0224 05:36:25.428160 7614 scope.go:117] "RemoveContainer" containerID="dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c" Feb 24 05:36:25.428720 master-0 kubenswrapper[7614]: E0224 05:36:25.428662 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": container with ID starting with dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c not found: ID does not exist" containerID="dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c" Feb 24 05:36:25.428720 master-0 kubenswrapper[7614]: I0224 05:36:25.428706 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c"} err="failed to get container status \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": rpc error: code = NotFound desc = could not find container \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": container with ID starting with dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c not found: ID does not exist" Feb 24 05:36:25.428864 master-0 kubenswrapper[7614]: I0224 05:36:25.428730 7614 scope.go:117] "RemoveContainer" containerID="e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0" Feb 24 
05:36:25.429170 master-0 kubenswrapper[7614]: E0224 05:36:25.429112 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": container with ID starting with e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0 not found: ID does not exist" containerID="e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0" Feb 24 05:36:25.429261 master-0 kubenswrapper[7614]: I0224 05:36:25.429157 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0"} err="failed to get container status \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": rpc error: code = NotFound desc = could not find container \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": container with ID starting with e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0 not found: ID does not exist" Feb 24 05:36:25.429261 master-0 kubenswrapper[7614]: I0224 05:36:25.429186 7614 scope.go:117] "RemoveContainer" containerID="ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb" Feb 24 05:36:25.429729 master-0 kubenswrapper[7614]: I0224 05:36:25.429677 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb"} err="failed to get container status \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": rpc error: code = NotFound desc = could not find container \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": container with ID starting with ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb not found: ID does not exist" Feb 24 05:36:25.429729 master-0 kubenswrapper[7614]: I0224 05:36:25.429710 7614 scope.go:117] "RemoveContainer" 
containerID="9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6" Feb 24 05:36:25.430082 master-0 kubenswrapper[7614]: I0224 05:36:25.430026 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6"} err="failed to get container status \"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": rpc error: code = NotFound desc = could not find container \"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": container with ID starting with 9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6 not found: ID does not exist" Feb 24 05:36:25.430082 master-0 kubenswrapper[7614]: I0224 05:36:25.430066 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" Feb 24 05:36:25.430702 master-0 kubenswrapper[7614]: I0224 05:36:25.430638 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"} err="failed to get container status \"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": rpc error: code = NotFound desc = could not find container \"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": container with ID starting with 386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982 not found: ID does not exist" Feb 24 05:36:25.430796 master-0 kubenswrapper[7614]: I0224 05:36:25.430693 7614 scope.go:117] "RemoveContainer" containerID="806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997" Feb 24 05:36:25.431304 master-0 kubenswrapper[7614]: I0224 05:36:25.431248 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997"} err="failed to get container status 
\"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": rpc error: code = NotFound desc = could not find container \"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": container with ID starting with 806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997 not found: ID does not exist" Feb 24 05:36:25.431304 master-0 kubenswrapper[7614]: I0224 05:36:25.431287 7614 scope.go:117] "RemoveContainer" containerID="dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c" Feb 24 05:36:25.431760 master-0 kubenswrapper[7614]: I0224 05:36:25.431700 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c"} err="failed to get container status \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": rpc error: code = NotFound desc = could not find container \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": container with ID starting with dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c not found: ID does not exist" Feb 24 05:36:25.431760 master-0 kubenswrapper[7614]: I0224 05:36:25.431748 7614 scope.go:117] "RemoveContainer" containerID="e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0" Feb 24 05:36:25.432303 master-0 kubenswrapper[7614]: I0224 05:36:25.432251 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0"} err="failed to get container status \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": rpc error: code = NotFound desc = could not find container \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": container with ID starting with e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0 not found: ID does not exist" Feb 24 05:36:25.432303 master-0 kubenswrapper[7614]: I0224 05:36:25.432285 7614 
scope.go:117] "RemoveContainer" containerID="ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb" Feb 24 05:36:25.432774 master-0 kubenswrapper[7614]: I0224 05:36:25.432714 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb"} err="failed to get container status \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": rpc error: code = NotFound desc = could not find container \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": container with ID starting with ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb not found: ID does not exist" Feb 24 05:36:25.432774 master-0 kubenswrapper[7614]: I0224 05:36:25.432755 7614 scope.go:117] "RemoveContainer" containerID="9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6" Feb 24 05:36:25.433549 master-0 kubenswrapper[7614]: I0224 05:36:25.433495 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6"} err="failed to get container status \"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": rpc error: code = NotFound desc = could not find container \"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": container with ID starting with 9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6 not found: ID does not exist" Feb 24 05:36:25.433549 master-0 kubenswrapper[7614]: I0224 05:36:25.433533 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" Feb 24 05:36:25.434020 master-0 kubenswrapper[7614]: I0224 05:36:25.433950 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"} err="failed to get container status 
\"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": rpc error: code = NotFound desc = could not find container \"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": container with ID starting with 386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982 not found: ID does not exist" Feb 24 05:36:25.434020 master-0 kubenswrapper[7614]: I0224 05:36:25.434007 7614 scope.go:117] "RemoveContainer" containerID="806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997" Feb 24 05:36:25.434957 master-0 kubenswrapper[7614]: I0224 05:36:25.434905 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997"} err="failed to get container status \"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": rpc error: code = NotFound desc = could not find container \"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": container with ID starting with 806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997 not found: ID does not exist" Feb 24 05:36:25.434957 master-0 kubenswrapper[7614]: I0224 05:36:25.434936 7614 scope.go:117] "RemoveContainer" containerID="dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c" Feb 24 05:36:25.435575 master-0 kubenswrapper[7614]: I0224 05:36:25.435510 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c"} err="failed to get container status \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": rpc error: code = NotFound desc = could not find container \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": container with ID starting with dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c not found: ID does not exist" Feb 24 05:36:25.435575 master-0 kubenswrapper[7614]: I0224 05:36:25.435560 7614 
scope.go:117] "RemoveContainer" containerID="e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0" Feb 24 05:36:25.436242 master-0 kubenswrapper[7614]: I0224 05:36:25.436013 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0"} err="failed to get container status \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": rpc error: code = NotFound desc = could not find container \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": container with ID starting with e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0 not found: ID does not exist" Feb 24 05:36:25.436242 master-0 kubenswrapper[7614]: I0224 05:36:25.436044 7614 scope.go:117] "RemoveContainer" containerID="ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb" Feb 24 05:36:25.436457 master-0 kubenswrapper[7614]: I0224 05:36:25.436403 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb"} err="failed to get container status \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": rpc error: code = NotFound desc = could not find container \"ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb\": container with ID starting with ee772b432f63e876abbe548c41b44054165b309149c77604c256c866ea308fdb not found: ID does not exist" Feb 24 05:36:25.436457 master-0 kubenswrapper[7614]: I0224 05:36:25.436440 7614 scope.go:117] "RemoveContainer" containerID="9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6" Feb 24 05:36:25.437982 master-0 kubenswrapper[7614]: I0224 05:36:25.437881 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6"} err="failed to get container status 
\"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": rpc error: code = NotFound desc = could not find container \"9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6\": container with ID starting with 9fd60c40d34b354fc0543b6f24498012605ff3f1280ac8ad107ef8c7af8045c6 not found: ID does not exist" Feb 24 05:36:25.437982 master-0 kubenswrapper[7614]: I0224 05:36:25.437911 7614 scope.go:117] "RemoveContainer" containerID="386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982" Feb 24 05:36:25.438443 master-0 kubenswrapper[7614]: I0224 05:36:25.438389 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982"} err="failed to get container status \"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": rpc error: code = NotFound desc = could not find container \"386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982\": container with ID starting with 386f03588b2fec464602c05980d3d5bd01154f4e28abd6dc77dfb5d576846982 not found: ID does not exist" Feb 24 05:36:25.438443 master-0 kubenswrapper[7614]: I0224 05:36:25.438426 7614 scope.go:117] "RemoveContainer" containerID="806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997" Feb 24 05:36:25.438916 master-0 kubenswrapper[7614]: I0224 05:36:25.438851 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997"} err="failed to get container status \"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": rpc error: code = NotFound desc = could not find container \"806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997\": container with ID starting with 806562bb76905d2503927db61aa9771f4fa696f0288313aafa1dfd726165b997 not found: ID does not exist" Feb 24 05:36:25.438916 master-0 kubenswrapper[7614]: I0224 05:36:25.438905 7614 
scope.go:117] "RemoveContainer" containerID="dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c" Feb 24 05:36:25.439421 master-0 kubenswrapper[7614]: I0224 05:36:25.439367 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c"} err="failed to get container status \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": rpc error: code = NotFound desc = could not find container \"dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c\": container with ID starting with dd597b6ad5f257c6a61d0a1a9b377d01faf516c3b8373c6ff9e2832da517d51c not found: ID does not exist" Feb 24 05:36:25.439421 master-0 kubenswrapper[7614]: I0224 05:36:25.439404 7614 scope.go:117] "RemoveContainer" containerID="e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0" Feb 24 05:36:25.439809 master-0 kubenswrapper[7614]: I0224 05:36:25.439750 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0"} err="failed to get container status \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": rpc error: code = NotFound desc = could not find container \"e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0\": container with ID starting with e07d27e1643309ae09b9bf325c92ee0eea5fe45e32bcb73532559162327b2ce0 not found: ID does not exist" Feb 24 05:36:26.683915 master-0 kubenswrapper[7614]: I0224 05:36:26.683827 7614 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 24 05:36:26.818858 master-0 kubenswrapper[7614]: I0224 05:36:26.818781 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/154c1cd0-d69a-4213-8fc2-2d80217c358e-kube-api-access\") pod \"154c1cd0-d69a-4213-8fc2-2d80217c358e\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") "
Feb 24 05:36:26.818858 master-0 kubenswrapper[7614]: I0224 05:36:26.818837 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-kubelet-dir\") pod \"154c1cd0-d69a-4213-8fc2-2d80217c358e\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") "
Feb 24 05:36:26.819275 master-0 kubenswrapper[7614]: I0224 05:36:26.819005 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-var-lock\") pod \"154c1cd0-d69a-4213-8fc2-2d80217c358e\" (UID: \"154c1cd0-d69a-4213-8fc2-2d80217c358e\") "
Feb 24 05:36:26.819275 master-0 kubenswrapper[7614]: I0224 05:36:26.819050 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "154c1cd0-d69a-4213-8fc2-2d80217c358e" (UID: "154c1cd0-d69a-4213-8fc2-2d80217c358e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:36:26.819275 master-0 kubenswrapper[7614]: I0224 05:36:26.819221 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-var-lock" (OuterVolumeSpecName: "var-lock") pod "154c1cd0-d69a-4213-8fc2-2d80217c358e" (UID: "154c1cd0-d69a-4213-8fc2-2d80217c358e"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:36:26.819682 master-0 kubenswrapper[7614]: I0224 05:36:26.819644 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:36:26.819682 master-0 kubenswrapper[7614]: I0224 05:36:26.819675 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/154c1cd0-d69a-4213-8fc2-2d80217c358e-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:36:26.824684 master-0 kubenswrapper[7614]: I0224 05:36:26.824584 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/154c1cd0-d69a-4213-8fc2-2d80217c358e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "154c1cd0-d69a-4213-8fc2-2d80217c358e" (UID: "154c1cd0-d69a-4213-8fc2-2d80217c358e"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:36:26.921722 master-0 kubenswrapper[7614]: I0224 05:36:26.921540 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/154c1cd0-d69a-4213-8fc2-2d80217c358e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:36:27.284808 master-0 kubenswrapper[7614]: I0224 05:36:27.284738 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"154c1cd0-d69a-4213-8fc2-2d80217c358e","Type":"ContainerDied","Data":"49dc4d8de02054e0c7305ee0abb7f18a0ace00c3ecc8e971017afe0705de270d"}
Feb 24 05:36:27.284808 master-0 kubenswrapper[7614]: I0224 05:36:27.284818 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49dc4d8de02054e0c7305ee0abb7f18a0ace00c3ecc8e971017afe0705de270d"
Feb 24 05:36:27.285102 master-0 kubenswrapper[7614]: I0224 05:36:27.284826 7614 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 24 05:36:28.296363 master-0 kubenswrapper[7614]: I0224 05:36:28.296208 7614 generic.go:334] "Generic (PLEG): container finished" podID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerID="4f3ae3a1fb93152f16413963009dac29f899944719e22e0315c1d5fd940eb4a6" exitCode=0
Feb 24 05:36:28.296363 master-0 kubenswrapper[7614]: I0224 05:36:28.296279 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerDied","Data":"4f3ae3a1fb93152f16413963009dac29f899944719e22e0315c1d5fd940eb4a6"}
Feb 24 05:36:28.296363 master-0 kubenswrapper[7614]: I0224 05:36:28.296350 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" event={"ID":"be7a4b9e-1e9a-4298-b804-21b683805c0e","Type":"ContainerStarted","Data":"d64e503faa84bc5cb54350f7f5f8e0d5cf8f920c4de3b19ee4cd5ff7e2a6dc7b"}
Feb 24 05:36:28.296363 master-0 kubenswrapper[7614]: I0224 05:36:28.296380 7614 scope.go:117] "RemoveContainer" containerID="acb0698fd79ca407db7d9ea2aa9e8794fcca326eb46507a49a5c7b349296ed25"
Feb 24 05:36:28.660722 master-0 kubenswrapper[7614]: I0224 05:36:28.660455 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:36:28.665548 master-0 kubenswrapper[7614]: I0224 05:36:28.665501 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:28.665548 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:28.665548 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:28.665548 master-0 kubenswrapper[7614]: healthz check failed
Feb 24
05:36:28.665904 master-0 kubenswrapper[7614]: I0224 05:36:28.665589 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:29.664020 master-0 kubenswrapper[7614]: I0224 05:36:29.663923 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:29.664020 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:29.664020 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:29.664020 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:29.665395 master-0 kubenswrapper[7614]: I0224 05:36:29.664025 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:30.323058 master-0 kubenswrapper[7614]: I0224 05:36:30.322942 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-retry-1-master-0_95004cdb-0c51-4cd2-8fa4-28bdf9901ec6/installer/0.log"
Feb 24 05:36:30.323503 master-0 kubenswrapper[7614]: I0224 05:36:30.323066 7614 generic.go:334] "Generic (PLEG): container finished" podID="95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" containerID="a2a044970cc81b5de15bc7ba6f6e6e6c9ceb095dd83aa2eb02ced08f9d7400e0" exitCode=1
Feb 24 05:36:30.323503 master-0 kubenswrapper[7614]: I0224 05:36:30.323181 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0"
event={"ID":"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6","Type":"ContainerDied","Data":"a2a044970cc81b5de15bc7ba6f6e6e6c9ceb095dd83aa2eb02ced08f9d7400e0"}
Feb 24 05:36:30.663734 master-0 kubenswrapper[7614]: I0224 05:36:30.663620 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:30.663734 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:30.663734 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:30.663734 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:30.665268 master-0 kubenswrapper[7614]: I0224 05:36:30.663763 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:30.957164 master-0 kubenswrapper[7614]: I0224 05:36:30.957104 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-retry-1-master-0_95004cdb-0c51-4cd2-8fa4-28bdf9901ec6/installer/0.log"
Feb 24 05:36:30.957460 master-0 kubenswrapper[7614]: I0224 05:36:30.957208 7614 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0"
Feb 24 05:36:31.108157 master-0 kubenswrapper[7614]: I0224 05:36:31.108102 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-var-lock\") pod \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") "
Feb 24 05:36:31.108658 master-0 kubenswrapper[7614]: I0224 05:36:31.108632 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kube-api-access\") pod \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") "
Feb 24 05:36:31.108900 master-0 kubenswrapper[7614]: I0224 05:36:31.108385 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-var-lock" (OuterVolumeSpecName: "var-lock") pod "95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" (UID: "95004cdb-0c51-4cd2-8fa4-28bdf9901ec6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:36:31.109015 master-0 kubenswrapper[7614]: I0224 05:36:31.108998 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kubelet-dir\") pod \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\" (UID: \"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6\") "
Feb 24 05:36:31.109145 master-0 kubenswrapper[7614]: I0224 05:36:31.109101 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" (UID: "95004cdb-0c51-4cd2-8fa4-28bdf9901ec6"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:36:31.109496 master-0 kubenswrapper[7614]: I0224 05:36:31.109477 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:36:31.109598 master-0 kubenswrapper[7614]: I0224 05:36:31.109584 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:36:31.114509 master-0 kubenswrapper[7614]: I0224 05:36:31.113782 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" (UID: "95004cdb-0c51-4cd2-8fa4-28bdf9901ec6"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:36:31.210948 master-0 kubenswrapper[7614]: I0224 05:36:31.210773 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:36:31.337504 master-0 kubenswrapper[7614]: I0224 05:36:31.337406 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-retry-1-master-0_95004cdb-0c51-4cd2-8fa4-28bdf9901ec6/installer/0.log"
Feb 24 05:36:31.338273 master-0 kubenswrapper[7614]: I0224 05:36:31.337521 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-retry-1-master-0" event={"ID":"95004cdb-0c51-4cd2-8fa4-28bdf9901ec6","Type":"ContainerDied","Data":"b5014913008664f81c058adb929e9ab0f5679eae3b998e111a8d5dd7cf444f9d"}
Feb 24 05:36:31.338273 master-0 kubenswrapper[7614]: I0224 05:36:31.337645 7614 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-2-retry-1-master-0"
Feb 24 05:36:31.338273 master-0 kubenswrapper[7614]: I0224 05:36:31.337653 7614 scope.go:117] "RemoveContainer" containerID="a2a044970cc81b5de15bc7ba6f6e6e6c9ceb095dd83aa2eb02ced08f9d7400e0"
Feb 24 05:36:31.385354 master-0 kubenswrapper[7614]: I0224 05:36:31.380338 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"]
Feb 24 05:36:31.394343 master-0 kubenswrapper[7614]: I0224 05:36:31.391730 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-retry-1-master-0"]
Feb 24 05:36:31.663269 master-0 kubenswrapper[7614]: I0224 05:36:31.663166 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:31.663269 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:31.663269 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:31.663269 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:31.663855 master-0 kubenswrapper[7614]: I0224 05:36:31.663281 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:32.664708 master-0 kubenswrapper[7614]: I0224 05:36:32.664591 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:32.664708 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:32.664708 master-0
kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:32.664708 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:32.665888 master-0 kubenswrapper[7614]: I0224 05:36:32.664751 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:33.184833 master-0 kubenswrapper[7614]: I0224 05:36:33.184743 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" path="/var/lib/kubelet/pods/95004cdb-0c51-4cd2-8fa4-28bdf9901ec6/volumes"
Feb 24 05:36:33.665198 master-0 kubenswrapper[7614]: I0224 05:36:33.665070 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:33.665198 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:33.665198 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:33.665198 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:33.666463 master-0 kubenswrapper[7614]: I0224 05:36:33.665228 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:34.663456 master-0 kubenswrapper[7614]: I0224 05:36:34.663351 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:34.663456 master-0 kubenswrapper[7614]: [-]has-synced failed:
reason withheld
Feb 24 05:36:34.663456 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:34.663456 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:34.663456 master-0 kubenswrapper[7614]: I0224 05:36:34.663458 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:35.174149 master-0 kubenswrapper[7614]: I0224 05:36:35.173916 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:35.207352 master-0 kubenswrapper[7614]: I0224 05:36:35.207241 7614 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="cd9d6df2-0818-4aa3-ac91-3c789cb6cd94"
Feb 24 05:36:35.207352 master-0 kubenswrapper[7614]: I0224 05:36:35.207332 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="cd9d6df2-0818-4aa3-ac91-3c789cb6cd94"
Feb 24 05:36:35.226472 master-0 kubenswrapper[7614]: I0224 05:36:35.226365 7614 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:35.228864 master-0 kubenswrapper[7614]: I0224 05:36:35.228775 7614 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 24 05:36:35.235500 master-0 kubenswrapper[7614]: I0224 05:36:35.235408 7614 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 24 05:36:35.253834 master-0 kubenswrapper[7614]: I0224 05:36:35.253731 7614 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:35.257013 master-0 kubenswrapper[7614]: I0224 05:36:35.256944 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 05:36:35.257647 master-0 kubenswrapper[7614]: I0224 05:36:35.257589 7614 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 24 05:36:35.382785 master-0 kubenswrapper[7614]: I0224 05:36:35.382692 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"b23dfe329a1134a3919827a4fef6a742a5c3a54647b515a5ae24efa737eaeba7"}
Feb 24 05:36:35.661116 master-0 kubenswrapper[7614]: I0224 05:36:35.661046 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:36:35.664344 master-0 kubenswrapper[7614]: I0224 05:36:35.664268 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:35.664344 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:35.664344 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:35.664344 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:35.664545 master-0 kubenswrapper[7614]: I0224 05:36:35.664378 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:36.395433 master-0
kubenswrapper[7614]: I0224 05:36:36.395333 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"25ae168ba418dfc4c1b33e602fae0945e84f4e24a75587f39220f0946080e548"}
Feb 24 05:36:36.396101 master-0 kubenswrapper[7614]: I0224 05:36:36.395447 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"e0f72d95db3b526338789b8fcf2468920b15351bce1ec3d46e5d53624269cc95"}
Feb 24 05:36:36.666479 master-0 kubenswrapper[7614]: I0224 05:36:36.666433 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:36.666479 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:36.666479 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:36.666479 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:36.666796 master-0 kubenswrapper[7614]: I0224 05:36:36.666767 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:37.407462 master-0 kubenswrapper[7614]: I0224 05:36:37.407338 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"5f3f429a73b99edab07440134a29330648aee1055142d0e2a471d2ca4da191ec"}
Feb 24 05:36:37.407462 master-0 kubenswrapper[7614]: I0224 05:36:37.407437 7614
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"7b398e544e2416957c4399885f805d9a52847bdbb755fa9e7b753808f3ff7fcb"}
Feb 24 05:36:37.446736 master-0 kubenswrapper[7614]: I0224 05:36:37.446543 7614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.4465082750000002 podStartE2EDuration="2.446508275s" podCreationTimestamp="2026-02-24 05:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:36:37.442269641 +0000 UTC m=+1328.477012827" watchObservedRunningTime="2026-02-24 05:36:37.446508275 +0000 UTC m=+1328.481251491"
Feb 24 05:36:37.663412 master-0 kubenswrapper[7614]: I0224 05:36:37.663183 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:37.663412 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:37.663412 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:37.663412 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:37.663412 master-0 kubenswrapper[7614]: I0224 05:36:37.663295 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:38.664091 master-0 kubenswrapper[7614]: I0224 05:36:38.664016 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:38.664091 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:38.664091 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:38.664091 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:38.665178 master-0 kubenswrapper[7614]: I0224 05:36:38.665053 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:39.664292 master-0 kubenswrapper[7614]: I0224 05:36:39.664212 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:39.664292 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:39.664292 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:39.664292 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:39.665393 master-0 kubenswrapper[7614]: I0224 05:36:39.664347 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:40.664265 master-0 kubenswrapper[7614]: I0224 05:36:40.664171 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:40.664265 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24
05:36:40.664265 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:40.664265 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:40.665789 master-0 kubenswrapper[7614]: I0224 05:36:40.664293 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:41.662566 master-0 kubenswrapper[7614]: I0224 05:36:41.662471 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:41.662566 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:41.662566 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:41.662566 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:41.663019 master-0 kubenswrapper[7614]: I0224 05:36:41.662587 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:42.669504 master-0 kubenswrapper[7614]: I0224 05:36:42.669404 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:42.669504 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:42.669504 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:42.669504 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:42.670402 master-0 kubenswrapper[7614]: I0224 05:36:42.669520
7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:43.663881 master-0 kubenswrapper[7614]: I0224 05:36:43.663776 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:43.663881 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:43.663881 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:43.663881 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:43.664453 master-0 kubenswrapper[7614]: I0224 05:36:43.663926 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:36:44.664237 master-0 kubenswrapper[7614]: I0224 05:36:44.664115 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:44.664237 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:44.664237 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:36:44.664237 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:36:44.665395 master-0 kubenswrapper[7614]: I0224 05:36:44.664249 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP
probe failed with statuscode: 500"
Feb 24 05:36:45.255637 master-0 kubenswrapper[7614]: I0224 05:36:45.255571 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:45.256047 master-0 kubenswrapper[7614]: I0224 05:36:45.256025 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:45.256281 master-0 kubenswrapper[7614]: I0224 05:36:45.256259 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:45.256555 master-0 kubenswrapper[7614]: I0224 05:36:45.256537 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:45.261294 master-0 kubenswrapper[7614]: I0224 05:36:45.261254 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:45.262321 master-0 kubenswrapper[7614]: I0224 05:36:45.262268 7614 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:45.489620 master-0 kubenswrapper[7614]: I0224 05:36:45.489534 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:36:45.663556 master-0 kubenswrapper[7614]: I0224 05:36:45.663359 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:36:45.663556 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:36:45.663556 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:45.663556 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:45.663556 master-0 kubenswrapper[7614]: I0224 05:36:45.663476 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:46.508281 master-0 kubenswrapper[7614]: I0224 05:36:46.508178 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:36:46.663507 master-0 kubenswrapper[7614]: I0224 05:36:46.663367 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:46.663507 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:46.663507 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:46.663507 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:46.663948 master-0 kubenswrapper[7614]: I0224 05:36:46.663507 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:47.663341 master-0 kubenswrapper[7614]: I0224 05:36:47.663245 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:47.663341 master-0 kubenswrapper[7614]: [-]has-synced failed: reason 
withheld Feb 24 05:36:47.663341 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:47.663341 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:47.664178 master-0 kubenswrapper[7614]: I0224 05:36:47.663371 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:48.664165 master-0 kubenswrapper[7614]: I0224 05:36:48.664055 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:48.664165 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:48.664165 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:48.664165 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:48.665233 master-0 kubenswrapper[7614]: I0224 05:36:48.664173 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:49.664243 master-0 kubenswrapper[7614]: I0224 05:36:49.664079 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:49.664243 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:49.664243 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:49.664243 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:49.664243 master-0 kubenswrapper[7614]: I0224 
05:36:49.664240 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:50.597771 master-0 kubenswrapper[7614]: I0224 05:36:50.597698 7614 scope.go:117] "RemoveContainer" containerID="9c08e2b99bda6708882f4175ffb049128a51c70caca590d2e61441c5ea9ae2b4" Feb 24 05:36:50.664089 master-0 kubenswrapper[7614]: I0224 05:36:50.663966 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:50.664089 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:50.664089 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:50.664089 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:50.665059 master-0 kubenswrapper[7614]: I0224 05:36:50.664097 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:51.663420 master-0 kubenswrapper[7614]: I0224 05:36:51.663273 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:51.663420 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:51.663420 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:51.663420 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:51.664001 master-0 kubenswrapper[7614]: I0224 05:36:51.663435 7614 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:52.664603 master-0 kubenswrapper[7614]: I0224 05:36:52.664524 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:52.664603 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:52.664603 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:52.664603 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:52.666353 master-0 kubenswrapper[7614]: I0224 05:36:52.665685 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:53.663251 master-0 kubenswrapper[7614]: I0224 05:36:53.663079 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:53.663251 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:53.663251 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:53.663251 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:53.663251 master-0 kubenswrapper[7614]: I0224 05:36:53.663182 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 05:36:54.664467 master-0 kubenswrapper[7614]: I0224 05:36:54.664289 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:54.664467 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:54.664467 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:54.664467 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:54.664467 master-0 kubenswrapper[7614]: I0224 05:36:54.664456 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:55.663477 master-0 kubenswrapper[7614]: I0224 05:36:55.663372 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:55.663477 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:55.663477 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:55.663477 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:55.663477 master-0 kubenswrapper[7614]: I0224 05:36:55.663476 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:56.664531 master-0 kubenswrapper[7614]: I0224 05:36:56.664432 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:56.664531 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:56.664531 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:56.664531 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:56.664531 master-0 kubenswrapper[7614]: I0224 05:36:56.664536 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:57.664955 master-0 kubenswrapper[7614]: I0224 05:36:57.664830 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:57.664955 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:57.664955 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:57.664955 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:57.666346 master-0 kubenswrapper[7614]: I0224 05:36:57.664958 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:58.618638 master-0 kubenswrapper[7614]: I0224 05:36:58.618417 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/6.log" Feb 24 05:36:58.620927 master-0 kubenswrapper[7614]: I0224 05:36:58.620843 7614 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/5.log" Feb 24 05:36:58.621980 master-0 kubenswrapper[7614]: I0224 05:36:58.621894 7614 generic.go:334] "Generic (PLEG): container finished" podID="3d6b1ce7-1213-494c-829d-186d39eac7eb" containerID="e5961da58ba0000499976ed125663a28df9508f26428d259f2513e76bb11ef6f" exitCode=1 Feb 24 05:36:58.622138 master-0 kubenswrapper[7614]: I0224 05:36:58.621965 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerDied","Data":"e5961da58ba0000499976ed125663a28df9508f26428d259f2513e76bb11ef6f"} Feb 24 05:36:58.622138 master-0 kubenswrapper[7614]: I0224 05:36:58.622088 7614 scope.go:117] "RemoveContainer" containerID="b50331e20fa2ff1ed8c1b0d16427617da5e032349e660e1b87569256d54c21e5" Feb 24 05:36:58.623034 master-0 kubenswrapper[7614]: I0224 05:36:58.622966 7614 scope.go:117] "RemoveContainer" containerID="e5961da58ba0000499976ed125663a28df9508f26428d259f2513e76bb11ef6f" Feb 24 05:36:58.623554 master-0 kubenswrapper[7614]: E0224 05:36:58.623463 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:36:58.664769 master-0 kubenswrapper[7614]: I0224 05:36:58.664661 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:58.664769 master-0 
kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:58.664769 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:58.664769 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:58.672240 master-0 kubenswrapper[7614]: I0224 05:36:58.664797 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:36:59.636255 master-0 kubenswrapper[7614]: I0224 05:36:59.636152 7614 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/6.log" Feb 24 05:36:59.664211 master-0 kubenswrapper[7614]: I0224 05:36:59.664076 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:36:59.664211 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:36:59.664211 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:36:59.664211 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:36:59.664661 master-0 kubenswrapper[7614]: I0224 05:36:59.664233 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:00.664216 master-0 kubenswrapper[7614]: I0224 05:37:00.664112 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Feb 24 05:37:00.664216 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:00.664216 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:00.664216 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:00.665401 master-0 kubenswrapper[7614]: I0224 05:37:00.664219 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:01.663295 master-0 kubenswrapper[7614]: I0224 05:37:01.663156 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:01.663295 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:01.663295 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:01.663295 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:01.663980 master-0 kubenswrapper[7614]: I0224 05:37:01.663288 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:02.663622 master-0 kubenswrapper[7614]: I0224 05:37:02.663237 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:02.663622 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:02.663622 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:02.663622 master-0 
kubenswrapper[7614]: healthz check failed Feb 24 05:37:02.663622 master-0 kubenswrapper[7614]: I0224 05:37:02.663575 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:03.664119 master-0 kubenswrapper[7614]: I0224 05:37:03.664011 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:03.664119 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:03.664119 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:03.664119 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:03.665945 master-0 kubenswrapper[7614]: I0224 05:37:03.664126 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:04.664350 master-0 kubenswrapper[7614]: I0224 05:37:04.664239 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:04.664350 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:04.664350 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:04.664350 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:04.665756 master-0 kubenswrapper[7614]: I0224 05:37:04.664380 7614 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:05.664239 master-0 kubenswrapper[7614]: I0224 05:37:05.664134 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:05.664239 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:05.664239 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:05.664239 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:05.665470 master-0 kubenswrapper[7614]: I0224 05:37:05.664250 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:06.664753 master-0 kubenswrapper[7614]: I0224 05:37:06.663897 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:06.664753 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:06.664753 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:06.664753 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:06.664753 master-0 kubenswrapper[7614]: I0224 05:37:06.664043 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:07.663122 
master-0 kubenswrapper[7614]: I0224 05:37:07.663024 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:07.663122 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:07.663122 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:07.663122 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:07.663609 master-0 kubenswrapper[7614]: I0224 05:37:07.663150 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:08.673341 master-0 kubenswrapper[7614]: I0224 05:37:08.670747 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:08.673341 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:08.673341 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:08.673341 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:08.673341 master-0 kubenswrapper[7614]: I0224 05:37:08.670838 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:09.181723 master-0 kubenswrapper[7614]: I0224 05:37:09.181634 7614 scope.go:117] "RemoveContainer" containerID="e5961da58ba0000499976ed125663a28df9508f26428d259f2513e76bb11ef6f" Feb 24 05:37:09.182363 master-0 
kubenswrapper[7614]: E0224 05:37:09.182005 7614 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=ingress-operator pod=ingress-operator-6569778c84-rr8r7_openshift-ingress-operator(3d6b1ce7-1213-494c-829d-186d39eac7eb)\"" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" podUID="3d6b1ce7-1213-494c-829d-186d39eac7eb" Feb 24 05:37:09.664647 master-0 kubenswrapper[7614]: I0224 05:37:09.664549 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:09.664647 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:09.664647 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:09.664647 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:09.665345 master-0 kubenswrapper[7614]: I0224 05:37:09.664656 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:10.662899 master-0 kubenswrapper[7614]: I0224 05:37:10.662771 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:10.662899 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:10.662899 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:10.662899 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:10.663678 master-0 kubenswrapper[7614]: I0224 05:37:10.662932 7614 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:11.663501 master-0 kubenswrapper[7614]: I0224 05:37:11.663421 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:11.663501 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:11.663501 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:11.663501 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:11.664255 master-0 kubenswrapper[7614]: I0224 05:37:11.663532 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:12.664439 master-0 kubenswrapper[7614]: I0224 05:37:12.664235 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:12.664439 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:12.664439 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:12.664439 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:12.665677 master-0 kubenswrapper[7614]: I0224 05:37:12.664451 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 24 05:37:13.663474 master-0 kubenswrapper[7614]: I0224 05:37:13.663366 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:13.663474 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:13.663474 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:13.663474 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:13.664159 master-0 kubenswrapper[7614]: I0224 05:37:13.663499 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:14.663603 master-0 kubenswrapper[7614]: I0224 05:37:14.663460 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:14.663603 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:14.663603 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:14.663603 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:14.665155 master-0 kubenswrapper[7614]: I0224 05:37:14.663646 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:14.886944 master-0 kubenswrapper[7614]: I0224 05:37:14.886727 7614 kubelet.go:2431] "SyncLoop REMOVE" source="file" 
pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 24 05:37:14.887391 master-0 kubenswrapper[7614]: I0224 05:37:14.887218 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" containerID="cri-o://cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14" gracePeriod=15 Feb 24 05:37:14.887571 master-0 kubenswrapper[7614]: I0224 05:37:14.887503 7614 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea" gracePeriod=15 Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: I0224 05:37:14.892195 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: E0224 05:37:14.892897 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" containerName="installer" Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: I0224 05:37:14.892924 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" containerName="installer" Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: E0224 05:37:14.892967 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: I0224 05:37:14.892983 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver" Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: E0224 05:37:14.893002 7614 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: I0224 05:37:14.893016 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: E0224 05:37:14.893035 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: I0224 05:37:14.893048 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: E0224 05:37:14.893081 7614 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154c1cd0-d69a-4213-8fc2-2d80217c358e" containerName="installer"
Feb 24 05:37:14.893123 master-0 kubenswrapper[7614]: I0224 05:37:14.893093 7614 state_mem.go:107] "Deleted CPUSet assignment" podUID="154c1cd0-d69a-4213-8fc2-2d80217c358e" containerName="installer"
Feb 24 05:37:14.894807 master-0 kubenswrapper[7614]: I0224 05:37:14.893462 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="154c1cd0-d69a-4213-8fc2-2d80217c358e" containerName="installer"
Feb 24 05:37:14.894807 master-0 kubenswrapper[7614]: I0224 05:37:14.893488 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver"
Feb 24 05:37:14.894807 master-0 kubenswrapper[7614]: I0224 05:37:14.893512 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="95004cdb-0c51-4cd2-8fa4-28bdf9901ec6" containerName="installer"
Feb 24 05:37:14.894807 master-0 kubenswrapper[7614]: I0224 05:37:14.893536 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="setup"
Feb 24 05:37:14.894807 master-0 kubenswrapper[7614]: I0224 05:37:14.893568 7614 memory_manager.go:354] "RemoveStaleState removing state" podUID="687e92a6cecf1e2beeef16a0b322ad08" containerName="kube-apiserver-insecure-readyz"
Feb 24 05:37:14.898004 master-0 kubenswrapper[7614]: I0224 05:37:14.896743 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:14.898259 master-0 kubenswrapper[7614]: I0224 05:37:14.896756 7614 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 24 05:37:14.900392 master-0 kubenswrapper[7614]: I0224 05:37:14.900354 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:14.969626 master-0 kubenswrapper[7614]: I0224 05:37:14.969454 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:14.970047 master-0 kubenswrapper[7614]: I0224 05:37:14.969768 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:14.970047 master-0 kubenswrapper[7614]: I0224 05:37:14.969904 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:14.970047 master-0 kubenswrapper[7614]: I0224 05:37:14.970004 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:14.970219 master-0 kubenswrapper[7614]: I0224 05:37:14.970052 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:14.970219 master-0 kubenswrapper[7614]: I0224 05:37:14.970177 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:14.970346 master-0 kubenswrapper[7614]: I0224 05:37:14.970258 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:14.970487 master-0 kubenswrapper[7614]: I0224 05:37:14.970415 7614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:14.972796 master-0 kubenswrapper[7614]: E0224 05:37:14.972684 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:14.996968 master-0 kubenswrapper[7614]: E0224 05:37:14.996874 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.071367 master-0 kubenswrapper[7614]: I0224 05:37:15.071287 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.071637 master-0 kubenswrapper[7614]: I0224 05:37:15.071618 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.071792 master-0 kubenswrapper[7614]: I0224 05:37:15.071771 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.071921 master-0 kubenswrapper[7614]: I0224 05:37:15.071866 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.071921 master-0 kubenswrapper[7614]: I0224 05:37:15.071447 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.072024 master-0 kubenswrapper[7614]: I0224 05:37:15.071894 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:15.072024 master-0 kubenswrapper[7614]: I0224 05:37:15.071668 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.072210 master-0 kubenswrapper[7614]: I0224 05:37:15.072190 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:15.072412 master-0 kubenswrapper[7614]: I0224 05:37:15.072374 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:15.072557 master-0 kubenswrapper[7614]: I0224 05:37:15.072520 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:15.072613 master-0 kubenswrapper[7614]: I0224 05:37:15.072532 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:15.072732 master-0 kubenswrapper[7614]: I0224 05:37:15.072663 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:15.072793 master-0 kubenswrapper[7614]: I0224 05:37:15.072725 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.072793 master-0 kubenswrapper[7614]: I0224 05:37:15.072776 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.072906 master-0 kubenswrapper[7614]: I0224 05:37:15.072874 7614 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.073052 master-0 kubenswrapper[7614]: I0224 05:37:15.073026 7614 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.274078 master-0 kubenswrapper[7614]: I0224 05:37:15.273992 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:15.298980 master-0 kubenswrapper[7614]: I0224 05:37:15.298846 7614 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.308634 master-0 kubenswrapper[7614]: W0224 05:37:15.308113 7614 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb342c942d3d92fd08ed7cf68fafb94c.slice/crio-ef21d52c34e0ff209e507b2e241489d3a22d4196f3b18bf8ced7797fda251b4a WatchSource:0}: Error finding container ef21d52c34e0ff209e507b2e241489d3a22d4196f3b18bf8ced7797fda251b4a: Status 404 returned error can't find the container with id ef21d52c34e0ff209e507b2e241489d3a22d4196f3b18bf8ced7797fda251b4a
Feb 24 05:37:15.325346 master-0 kubenswrapper[7614]: E0224 05:37:15.325083 7614 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.1897180ffbe11f7b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:eb342c942d3d92fd08ed7cf68fafb94c,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8177c465e14c63854e5c0fa95ca0635cffc9b5dd3d077ecf971feedbc42b1274\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:37:15.322990459 +0000 UTC m=+1366.357733615,LastTimestamp:2026-02-24 05:37:15.322990459 +0000 UTC m=+1366.357733615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 24 05:37:15.664230 master-0 kubenswrapper[7614]: I0224 05:37:15.664189 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:15.664230 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:37:15.664230 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:37:15.664230 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:37:15.664805 master-0 kubenswrapper[7614]: I0224 05:37:15.664775 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:15.798724 master-0 kubenswrapper[7614]: I0224 05:37:15.798513 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350"}
Feb 24 05:37:15.798724 master-0 kubenswrapper[7614]: I0224 05:37:15.798624 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"4097b46c5415e7a8b1651e87123bd125c21ee99b1c3af149041760e25e6378ee"}
Feb 24 05:37:15.801176 master-0 kubenswrapper[7614]: E0224 05:37:15.801047 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:15.801767 master-0 kubenswrapper[7614]: I0224 05:37:15.801723 7614 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8" exitCode=0
Feb 24 05:37:15.801926 master-0 kubenswrapper[7614]: I0224 05:37:15.801900 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8"}
Feb 24 05:37:15.802017 master-0 kubenswrapper[7614]: I0224 05:37:15.802004 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"ef21d52c34e0ff209e507b2e241489d3a22d4196f3b18bf8ced7797fda251b4a"}
Feb 24 05:37:15.803525 master-0 kubenswrapper[7614]: E0224 05:37:15.803459 7614 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:15.804177 master-0 kubenswrapper[7614]: I0224 05:37:15.804149 7614 generic.go:334] "Generic (PLEG): container finished" podID="afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" containerID="8904c0214073753fcab4acc8adc0da951a7afde283497eeb5955cf76d5cf0b70" exitCode=0
Feb 24 05:37:15.804356 master-0 kubenswrapper[7614]: I0224 05:37:15.804274 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a","Type":"ContainerDied","Data":"8904c0214073753fcab4acc8adc0da951a7afde283497eeb5955cf76d5cf0b70"}
Feb 24 05:37:15.806421 master-0 kubenswrapper[7614]: I0224 05:37:15.806335 7614 status_manager.go:851] "Failed to get status for pod" podUID="afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 24 05:37:15.808066 master-0 kubenswrapper[7614]: I0224 05:37:15.807998 7614 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea" exitCode=0
Feb 24 05:37:16.663134 master-0 kubenswrapper[7614]: I0224 05:37:16.662861 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:16.663134 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:37:16.663134 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:37:16.663134 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:37:16.663134 master-0 kubenswrapper[7614]: I0224 05:37:16.662953 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:16.828925 master-0 kubenswrapper[7614]: I0224 05:37:16.828828 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e"}
Feb 24 05:37:16.829441 master-0 kubenswrapper[7614]: I0224 05:37:16.828942 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9"}
Feb 24 05:37:16.829441 master-0 kubenswrapper[7614]: I0224 05:37:16.828977 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3"}
Feb 24 05:37:17.313957 master-0 kubenswrapper[7614]: I0224 05:37:17.313896 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:37:17.360354 master-0 kubenswrapper[7614]: I0224 05:37:17.360263 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 05:37:17.427336 master-0 kubenswrapper[7614]: I0224 05:37:17.427107 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") pod \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") "
Feb 24 05:37:17.427336 master-0 kubenswrapper[7614]: I0224 05:37:17.427219 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") pod \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") "
Feb 24 05:37:17.427336 master-0 kubenswrapper[7614]: I0224 05:37:17.427295 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") "
Feb 24 05:37:17.427336 master-0 kubenswrapper[7614]: I0224 05:37:17.427345 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 24 05:37:17.427613 master-0 kubenswrapper[7614]: I0224 05:37:17.427370 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 24 05:37:17.427613 master-0 kubenswrapper[7614]: I0224 05:37:17.427411 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 24 05:37:17.427613 master-0 kubenswrapper[7614]: I0224 05:37:17.427463 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 24 05:37:17.427613 master-0 kubenswrapper[7614]: I0224 05:37:17.427481 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 24 05:37:17.427613 master-0 kubenswrapper[7614]: I0224 05:37:17.427552 7614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") pod \"687e92a6cecf1e2beeef16a0b322ad08\" (UID: \"687e92a6cecf1e2beeef16a0b322ad08\") "
Feb 24 05:37:17.427945 master-0 kubenswrapper[7614]: I0224 05:37:17.427888 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config" (OuterVolumeSpecName: "config") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:17.427945 master-0 kubenswrapper[7614]: I0224 05:37:17.427941 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock" (OuterVolumeSpecName: "var-lock") pod "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:17.428055 master-0 kubenswrapper[7614]: I0224 05:37:17.427960 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:17.428400 master-0 kubenswrapper[7614]: I0224 05:37:17.428333 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs" (OuterVolumeSpecName: "logs") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:17.428400 master-0 kubenswrapper[7614]: I0224 05:37:17.428375 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:17.428517 master-0 kubenswrapper[7614]: I0224 05:37:17.428404 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:17.428517 master-0 kubenswrapper[7614]: I0224 05:37:17.428416 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets" (OuterVolumeSpecName: "secrets") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:17.428517 master-0 kubenswrapper[7614]: I0224 05:37:17.428431 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "687e92a6cecf1e2beeef16a0b322ad08" (UID: "687e92a6cecf1e2beeef16a0b322ad08"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:17.432077 master-0 kubenswrapper[7614]: I0224 05:37:17.432034 7614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.528912 7614 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.528970 7614 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.528981 7614 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.528993 7614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.529003 7614 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-secrets\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.529014 7614 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.529023 7614 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.529032 7614 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.529010 master-0 kubenswrapper[7614]: I0224 05:37:17.529041 7614 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/687e92a6cecf1e2beeef16a0b322ad08-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:17.674064 master-0 kubenswrapper[7614]: I0224 05:37:17.673986 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:17.674064 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld
Feb 24 05:37:17.674064 master-0 kubenswrapper[7614]: [+]process-running ok
Feb 24 05:37:17.674064 master-0 kubenswrapper[7614]: healthz check failed
Feb 24 05:37:17.674442 master-0 kubenswrapper[7614]: I0224 05:37:17.674091 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:17.901674 master-0 kubenswrapper[7614]: I0224 05:37:17.900461 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328"}
Feb 24 05:37:17.901674 master-0 kubenswrapper[7614]: I0224 05:37:17.900526 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0"}
Feb 24 05:37:17.901674 master-0 kubenswrapper[7614]: I0224 05:37:17.901124 7614 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:17.920581 master-0 kubenswrapper[7614]: I0224 05:37:17.916662 7614 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a","Type":"ContainerDied","Data":"e43e86c2da24898ed3ceda5fba223181eeaf5fa1fa61d7f1b9a1561a31040dae"}
Feb 24 05:37:17.920581 master-0 kubenswrapper[7614]: I0224 05:37:17.916716 7614 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43e86c2da24898ed3ceda5fba223181eeaf5fa1fa61d7f1b9a1561a31040dae"
Feb 24 05:37:17.920581 master-0 kubenswrapper[7614]: I0224 05:37:17.916783 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 05:37:17.941302 master-0 kubenswrapper[7614]: I0224 05:37:17.938510 7614 generic.go:334] "Generic (PLEG): container finished" podID="687e92a6cecf1e2beeef16a0b322ad08" containerID="cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14" exitCode=0
Feb 24 05:37:17.941302 master-0 kubenswrapper[7614]: I0224 05:37:17.938583 7614 scope.go:117] "RemoveContainer" containerID="675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea"
Feb 24 05:37:17.941302 master-0 kubenswrapper[7614]: I0224 05:37:17.938716 7614 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 24 05:37:18.010667 master-0 kubenswrapper[7614]: I0224 05:37:18.010505 7614 scope.go:117] "RemoveContainer" containerID="cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14"
Feb 24 05:37:18.098842 master-0 kubenswrapper[7614]: I0224 05:37:18.097760 7614 scope.go:117] "RemoveContainer" containerID="8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d"
Feb 24 05:37:18.147865 master-0 kubenswrapper[7614]: I0224 05:37:18.147793 7614 scope.go:117] "RemoveContainer" containerID="675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea"
Feb 24 05:37:18.169728 master-0 kubenswrapper[7614]: E0224 05:37:18.169577 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea\": container with ID starting with 675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea not found: ID does not exist" containerID="675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea"
Feb 24 05:37:18.169728 master-0 kubenswrapper[7614]: I0224 05:37:18.169671 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea"} err="failed to get container status \"675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea\": rpc error: code = NotFound desc = could not find container \"675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea\": container with ID starting with 675a6b8500a8182d86383462e937389b91fb39e2924ac2657060054e1ef499ea not found: ID does not exist"
Feb 24 05:37:18.169728 master-0 kubenswrapper[7614]: I0224 05:37:18.169713 7614 scope.go:117] "RemoveContainer" containerID="cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14"
Feb 24 05:37:18.171196 master-0 kubenswrapper[7614]: E0224 05:37:18.170921 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14\": container with ID starting with cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14 not found: ID does not exist" containerID="cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14"
Feb 24 05:37:18.171196 master-0 kubenswrapper[7614]: I0224 05:37:18.170990 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14"} err="failed to get container status \"cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14\": rpc error: code = NotFound desc = could not find container \"cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14\": container with ID starting with cc6b6656b384d618f557c520566b8453504d4b13ef2e0b51275773dd51dc1d14 not found: ID does not exist"
Feb 24 05:37:18.171196 master-0 kubenswrapper[7614]: I0224 05:37:18.171026 7614 scope.go:117] "RemoveContainer" containerID="8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d"
Feb 24 05:37:18.171727 master-0 kubenswrapper[7614]: E0224 05:37:18.171663 7614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d\": container with ID starting with 8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d not found: ID does not exist" containerID="8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d"
Feb 24 05:37:18.171783 master-0 kubenswrapper[7614]: I0224 05:37:18.171749 7614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d"} err="failed to get container status 
\"8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d\": rpc error: code = NotFound desc = could not find container \"8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d\": container with ID starting with 8a354a07387d3c5b777d3173404f7e00d07019b84f1dd84ee4973912d46b023d not found: ID does not exist" Feb 24 05:37:18.662789 master-0 kubenswrapper[7614]: I0224 05:37:18.662699 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:18.662789 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:18.662789 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:18.662789 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:18.663246 master-0 kubenswrapper[7614]: I0224 05:37:18.662797 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:19.189405 master-0 kubenswrapper[7614]: I0224 05:37:19.189306 7614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="687e92a6cecf1e2beeef16a0b322ad08" path="/var/lib/kubelet/pods/687e92a6cecf1e2beeef16a0b322ad08/volumes" Feb 24 05:37:19.190498 master-0 kubenswrapper[7614]: I0224 05:37:19.189890 7614 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 24 05:37:19.663863 master-0 kubenswrapper[7614]: I0224 05:37:19.663734 7614 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 
24 05:37:19.663863 master-0 kubenswrapper[7614]: [-]has-synced failed: reason withheld Feb 24 05:37:19.663863 master-0 kubenswrapper[7614]: [+]process-running ok Feb 24 05:37:19.663863 master-0 kubenswrapper[7614]: healthz check failed Feb 24 05:37:19.663863 master-0 kubenswrapper[7614]: I0224 05:37:19.663852 7614 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:20.130069 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 24 05:37:20.168933 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 24 05:37:20.169505 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 24 05:37:20.176399 master-0 systemd[1]: kubelet.service: Consumed 4min 14.023s CPU time. Feb 24 05:37:20.196198 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 24 05:37:20.347214 master-0 kubenswrapper[34361]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 05:37:20.347214 master-0 kubenswrapper[34361]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 24 05:37:20.347214 master-0 kubenswrapper[34361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 05:37:20.347214 master-0 kubenswrapper[34361]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 05:37:20.347214 master-0 kubenswrapper[34361]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 24 05:37:20.347214 master-0 kubenswrapper[34361]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 05:37:20.348546 master-0 kubenswrapper[34361]: I0224 05:37:20.347337 34361 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 24 05:37:20.353146 master-0 kubenswrapper[34361]: W0224 05:37:20.353066 34361 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 05:37:20.353146 master-0 kubenswrapper[34361]: W0224 05:37:20.353129 34361 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 24 05:37:20.353146 master-0 kubenswrapper[34361]: W0224 05:37:20.353140 34361 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 05:37:20.353146 master-0 kubenswrapper[34361]: W0224 05:37:20.353156 34361 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353168 34361 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353178 34361 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353189 34361 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353199 34361 feature_gate.go:330] unrecognized 
feature gate: MixedCPUsAllocation Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353216 34361 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353230 34361 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353246 34361 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353258 34361 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353268 34361 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353279 34361 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353293 34361 feature_gate.go:330] unrecognized feature gate: Example Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353305 34361 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353388 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353446 34361 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 05:37:20.353529 master-0 kubenswrapper[34361]: W0224 05:37:20.353460 34361 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353907 34361 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353924 34361 feature_gate.go:330] unrecognized 
feature gate: ManagedBootImages Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353933 34361 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353943 34361 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353954 34361 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353963 34361 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353972 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353981 34361 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.353991 34361 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354000 34361 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354009 34361 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354018 34361 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354028 34361 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354037 34361 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354046 34361 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles 
Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354054 34361 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354071 34361 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354081 34361 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354091 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 05:37:20.354592 master-0 kubenswrapper[34361]: W0224 05:37:20.354102 34361 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354111 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354120 34361 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354128 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354137 34361 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354146 34361 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354155 34361 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354164 34361 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354172 34361 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAzure Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354181 34361 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354189 34361 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354198 34361 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354206 34361 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354216 34361 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354225 34361 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354242 34361 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354257 34361 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354267 34361 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354276 34361 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 05:37:20.356124 master-0 kubenswrapper[34361]: W0224 05:37:20.354286 34361 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354295 34361 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354304 34361 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354341 34361 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354351 34361 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354362 34361 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354371 34361 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354383 34361 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354394 34361 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354404 34361 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354413 34361 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354423 34361 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354434 34361 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354443 34361 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: W0224 05:37:20.354454 34361 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: I0224 05:37:20.354697 34361 flags.go:64] FLAG: --address="0.0.0.0" Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: I0224 05:37:20.354722 34361 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: I0224 05:37:20.354773 34361 flags.go:64] FLAG: --anonymous-auth="true" Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: I0224 05:37:20.354786 34361 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: I0224 05:37:20.354800 34361 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: I0224 05:37:20.354810 34361 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 24 05:37:20.359263 master-0 kubenswrapper[34361]: I0224 05:37:20.354824 34361 flags.go:64] FLAG: 
--authorization-mode="AlwaysAllow" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354837 34361 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354848 34361 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354858 34361 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354871 34361 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354884 34361 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354896 34361 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354907 34361 flags.go:64] FLAG: --cgroup-root="" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354916 34361 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354926 34361 flags.go:64] FLAG: --client-ca-file="" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354936 34361 flags.go:64] FLAG: --cloud-config="" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354946 34361 flags.go:64] FLAG: --cloud-provider="" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354957 34361 flags.go:64] FLAG: --cluster-dns="[]" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354970 34361 flags.go:64] FLAG: --cluster-domain="" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354980 34361 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.354993 34361 flags.go:64] FLAG: 
--config-dir="" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355002 34361 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355013 34361 flags.go:64] FLAG: --container-log-max-files="5" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355026 34361 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355036 34361 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355046 34361 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355056 34361 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355067 34361 flags.go:64] FLAG: --contention-profiling="false" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355077 34361 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355087 34361 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 24 05:37:20.361794 master-0 kubenswrapper[34361]: I0224 05:37:20.355098 34361 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355108 34361 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355121 34361 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355132 34361 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355142 34361 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 
05:37:20.355151 34361 flags.go:64] FLAG: --enable-load-reader="false" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355162 34361 flags.go:64] FLAG: --enable-server="true" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355171 34361 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355187 34361 flags.go:64] FLAG: --event-burst="100" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355198 34361 flags.go:64] FLAG: --event-qps="50" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355208 34361 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355218 34361 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355228 34361 flags.go:64] FLAG: --eviction-hard="" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355241 34361 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355251 34361 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355262 34361 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355273 34361 flags.go:64] FLAG: --eviction-soft="" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355282 34361 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355292 34361 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355302 34361 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355342 
34361 flags.go:64] FLAG: --experimental-mounter-path="" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355352 34361 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355362 34361 flags.go:64] FLAG: --fail-swap-on="true" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355373 34361 flags.go:64] FLAG: --feature-gates="" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355385 34361 flags.go:64] FLAG: --file-check-frequency="20s" Feb 24 05:37:20.363721 master-0 kubenswrapper[34361]: I0224 05:37:20.355395 34361 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355405 34361 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355415 34361 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355426 34361 flags.go:64] FLAG: --healthz-port="10248" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355436 34361 flags.go:64] FLAG: --help="false" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355445 34361 flags.go:64] FLAG: --hostname-override="" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355455 34361 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355465 34361 flags.go:64] FLAG: --http-check-frequency="20s" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355475 34361 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355485 34361 flags.go:64] FLAG: --image-credential-provider-config="" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355495 34361 flags.go:64] FLAG: 
--image-gc-high-threshold="85" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355505 34361 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355514 34361 flags.go:64] FLAG: --image-service-endpoint="" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355524 34361 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355534 34361 flags.go:64] FLAG: --kube-api-burst="100" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355544 34361 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355555 34361 flags.go:64] FLAG: --kube-api-qps="50" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355564 34361 flags.go:64] FLAG: --kube-reserved="" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355574 34361 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355585 34361 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355595 34361 flags.go:64] FLAG: --kubelet-cgroups="" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355604 34361 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355615 34361 flags.go:64] FLAG: --lock-file="" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355624 34361 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355635 34361 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 24 05:37:20.365469 master-0 kubenswrapper[34361]: I0224 05:37:20.355645 34361 flags.go:64] FLAG: --log-json-info-buffer-size="0" 
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355664 34361 flags.go:64] FLAG: --log-json-split-stream="false" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355674 34361 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355684 34361 flags.go:64] FLAG: --log-text-split-stream="false" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355694 34361 flags.go:64] FLAG: --logging-format="text" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355704 34361 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355715 34361 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355724 34361 flags.go:64] FLAG: --manifest-url="" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355734 34361 flags.go:64] FLAG: --manifest-url-header="" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355748 34361 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355758 34361 flags.go:64] FLAG: --max-open-files="1000000" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355770 34361 flags.go:64] FLAG: --max-pods="110" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355780 34361 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355791 34361 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355800 34361 flags.go:64] FLAG: --memory-manager-policy="None" Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355811 34361 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" 
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355824 34361 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355834 34361 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355844 34361 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355873 34361 flags.go:64] FLAG: --node-status-max-images="50"
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355884 34361 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355894 34361 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355904 34361 flags.go:64] FLAG: --pod-cidr=""
Feb 24 05:37:20.368210 master-0 kubenswrapper[34361]: I0224 05:37:20.355914 34361 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5001a555eb05eef7f23d64667303c2b4db8343ee900c265f7613c40c1db229"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.355929 34361 flags.go:64] FLAG: --pod-manifest-path=""
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.355939 34361 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.355949 34361 flags.go:64] FLAG: --pods-per-core="0"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.355959 34361 flags.go:64] FLAG: --port="10250"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.355970 34361 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.355980 34361 flags.go:64] FLAG: --provider-id=""
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.355990 34361 flags.go:64] FLAG: --qos-reserved=""
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356000 34361 flags.go:64] FLAG: --read-only-port="10255"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356010 34361 flags.go:64] FLAG: --register-node="true"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356020 34361 flags.go:64] FLAG: --register-schedulable="true"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356029 34361 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356046 34361 flags.go:64] FLAG: --registry-burst="10"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356056 34361 flags.go:64] FLAG: --registry-qps="5"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356067 34361 flags.go:64] FLAG: --reserved-cpus=""
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356077 34361 flags.go:64] FLAG: --reserved-memory=""
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356090 34361 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356100 34361 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356111 34361 flags.go:64] FLAG: --rotate-certificates="false"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356129 34361 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356139 34361 flags.go:64] FLAG: --runonce="false"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356150 34361 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356161 34361 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356172 34361 flags.go:64] FLAG: --seccomp-default="false"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356181 34361 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356191 34361 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 24 05:37:20.369994 master-0 kubenswrapper[34361]: I0224 05:37:20.356201 34361 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356212 34361 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356223 34361 flags.go:64] FLAG: --storage-driver-password="root"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356233 34361 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356242 34361 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356252 34361 flags.go:64] FLAG: --storage-driver-user="root"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356262 34361 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356273 34361 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356283 34361 flags.go:64] FLAG: --system-cgroups=""
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356293 34361 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356339 34361 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356350 34361 flags.go:64] FLAG: --tls-cert-file=""
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356361 34361 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356373 34361 flags.go:64] FLAG: --tls-min-version=""
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356384 34361 flags.go:64] FLAG: --tls-private-key-file=""
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356394 34361 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356404 34361 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356414 34361 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356425 34361 flags.go:64] FLAG: --v="2"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356437 34361 flags.go:64] FLAG: --version="false"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356450 34361 flags.go:64] FLAG: --vmodule=""
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356462 34361 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: I0224 05:37:20.356473 34361 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: W0224 05:37:20.356796 34361 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 05:37:20.371766 master-0 kubenswrapper[34361]: W0224 05:37:20.356813 34361 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356829 34361 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356839 34361 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356858 34361 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356868 34361 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356882 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356892 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356901 34361 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356910 34361 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356918 34361 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356927 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356935 34361 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356944 34361 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356954 34361 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356963 34361 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356972 34361 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356982 34361 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356990 34361 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.356999 34361 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.357007 34361 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 05:37:20.373659 master-0 kubenswrapper[34361]: W0224 05:37:20.357016 34361 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357025 34361 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357033 34361 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357042 34361 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357053 34361 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357064 34361 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357074 34361 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357083 34361 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357093 34361 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357102 34361 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357113 34361 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357122 34361 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357130 34361 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357143 34361 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357152 34361 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357164 34361 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357175 34361 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357188 34361 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357198 34361 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 05:37:20.375158 master-0 kubenswrapper[34361]: W0224 05:37:20.357207 34361 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357217 34361 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357228 34361 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357237 34361 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357248 34361 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357256 34361 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357265 34361 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357274 34361 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357283 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357291 34361 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357300 34361 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357335 34361 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357346 34361 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357355 34361 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357366 34361 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357377 34361 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357387 34361 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357397 34361 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357407 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357416 34361 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 05:37:20.376525 master-0 kubenswrapper[34361]: W0224 05:37:20.357424 34361 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357433 34361 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357442 34361 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357451 34361 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357459 34361 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357468 34361 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357480 34361 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357488 34361 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357499 34361 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357508 34361 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357520 34361 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.357528 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: I0224 05:37:20.357558 34361 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: I0224 05:37:20.367181 34361 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: I0224 05:37:20.367237 34361 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 24 05:37:20.379025 master-0 kubenswrapper[34361]: W0224 05:37:20.367405 34361 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367425 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367435 34361 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367445 34361 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367457 34361 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367468 34361 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367477 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367485 34361 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367494 34361 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367504 34361 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367513 34361 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367521 34361 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367529 34361 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367537 34361 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367546 34361 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367554 34361 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367562 34361 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367570 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367578 34361 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 05:37:20.380825 master-0 kubenswrapper[34361]: W0224 05:37:20.367585 34361 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367596 34361 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367606 34361 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367614 34361 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367622 34361 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367630 34361 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367639 34361 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367647 34361 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367655 34361 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367663 34361 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367672 34361 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367680 34361 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367689 34361 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367697 34361 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367706 34361 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367717 34361 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367725 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367733 34361 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367744 34361 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 05:37:20.383206 master-0 kubenswrapper[34361]: W0224 05:37:20.367753 34361 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367762 34361 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367771 34361 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367780 34361 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367789 34361 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367797 34361 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367806 34361 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367814 34361 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367823 34361 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367832 34361 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367840 34361 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367850 34361 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367858 34361 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367868 34361 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367876 34361 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367884 34361 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367892 34361 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367900 34361 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367908 34361 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367918 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 05:37:20.385741 master-0 kubenswrapper[34361]: W0224 05:37:20.367927 34361 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.367936 34361 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.367944 34361 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.367954 34361 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.367963 34361 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.367970 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.367979 34361 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.367987 34361 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.367995 34361 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.368004 34361 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.368012 34361 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.368021 34361 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.368032 34361 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.368041 34361 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: I0224 05:37:20.368055 34361 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.368344 34361 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 05:37:20.388503 master-0 kubenswrapper[34361]: W0224 05:37:20.368357 34361 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368366 34361 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368375 34361 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368383 34361 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368391 34361 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368399 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368407 34361 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368415 34361 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368423 34361 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368431 34361 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368440 34361 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368448 34361 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368455 34361 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368463 34361 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368471 34361 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368479 34361 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368486 34361 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368494 34361 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368503 34361 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368511 34361 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 05:37:20.389838 master-0 kubenswrapper[34361]: W0224 05:37:20.368519 34361 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368527 34361 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368535 34361 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368543 34361 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368551 34361 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368558 34361 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368566 34361
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368576 34361 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368586 34361 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368595 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368603 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368612 34361 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368621 34361 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368629 34361 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368638 34361 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368646 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368654 34361 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368662 34361 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 24 05:37:20.391152 master-0 kubenswrapper[34361]: W0224 05:37:20.368670 34361 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 24 05:37:20.391152 master-0 
kubenswrapper[34361]: W0224 05:37:20.368679 34361 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368687 34361 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368695 34361 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368704 34361 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368712 34361 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368721 34361 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368729 34361 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368737 34361 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368746 34361 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368756 34361 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368766 34361 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368775 34361 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368785 34361 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368795 34361 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368804 34361 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368814 34361 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368822 34361 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368832 34361 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368842 34361 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 24 05:37:20.392474 master-0 kubenswrapper[34361]: W0224 05:37:20.368851 34361 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368860 34361 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368869 34361 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368878 34361 feature_gate.go:330] unrecognized feature gate: Example Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368887 34361 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368895 34361 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368912 34361 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368921 34361 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368929 34361 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368937 34361 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368947 34361 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.368955 34361 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: W0224 05:37:20.369147 
34361 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: I0224 05:37:20.369161 34361 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: I0224 05:37:20.369685 34361 server.go:940] "Client rotation is on, will bootstrap in background" Feb 24 05:37:20.393646 master-0 kubenswrapper[34361]: I0224 05:37:20.373029 34361 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.373178 34361 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.373632 34361 server.go:997] "Starting client certificate rotation" Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.373650 34361 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.373930 34361 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-25 05:04:32 +0000 UTC, rotation deadline is 2026-02-25 01:28:50.248274839 +0000 UTC Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.374063 34361 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h51m29.87421779s for next certificate rotation Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.374787 34361 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.377710 34361 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.381726 34361 log.go:25] "Validated CRI v1 runtime API" Feb 24 05:37:20.394667 master-0 kubenswrapper[34361]: I0224 05:37:20.389485 34361 log.go:25] "Validated CRI v1 image API" Feb 24 05:37:20.399664 master-0 kubenswrapper[34361]: I0224 05:37:20.399599 34361 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 24 05:37:20.422118 master-0 kubenswrapper[34361]: I0224 05:37:20.421910 34361 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 c6a7f20e-7412-4bcb-a694-c65c3535af20:/dev/vda3] Feb 24 05:37:20.425156 master-0 kubenswrapper[34361]: I0224 05:37:20.422082 34361 fs.go:136] Filesystem partitions: 
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/005aea3f18d4d280e39bcec0aace6a6b0719831dd54d5e5f2bb06b03a10a1e55/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/005aea3f18d4d280e39bcec0aace6a6b0719831dd54d5e5f2bb06b03a10a1e55/userdata/shm major:0 minor:933 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630/userdata/shm major:0 minor:301 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/081425b6bb126676c8a3b61b952db3a17ca28803f3b46af593db55de6dd0db70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/081425b6bb126676c8a3b61b952db3a17ca28803f3b46af593db55de6dd0db70/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0a200f132e292ed5670ebdd181d6f49bb6c398710ac1ebdc14c3c7cdc32842f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0a200f132e292ed5670ebdd181d6f49bb6c398710ac1ebdc14c3c7cdc32842f8/userdata/shm major:0 minor:707 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c671a703dbac86ce7b1c5dcbfbe1729e65e787dfd6afe8e60d163a277f3e763/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c671a703dbac86ce7b1c5dcbfbe1729e65e787dfd6afe8e60d163a277f3e763/userdata/shm major:0 minor:936 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/0de580e3a4de4a7d062f7572a6d4a10fb107356c71fe5f479e8d76eb00cfe863/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0de580e3a4de4a7d062f7572a6d4a10fb107356c71fe5f479e8d76eb00cfe863/userdata/shm major:0 minor:516 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0e75a15a8297368a6c95abe6074b8d1fd12c66b5f2515773157daf62c40e79a8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0e75a15a8297368a6c95abe6074b8d1fd12c66b5f2515773157daf62c40e79a8/userdata/shm major:0 minor:511 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0fcfa31d947740e8b2c9697ed507eb02078278c10de3439215a818d10753dde6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0fcfa31d947740e8b2c9697ed507eb02078278c10de3439215a818d10753dde6/userdata/shm major:0 minor:281 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/19df6454a08add523c5ff47203d9500ee4d5041717ffe824b8f6b33008f7fb0d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/19df6454a08add523c5ff47203d9500ee4d5041717ffe824b8f6b33008f7fb0d/userdata/shm major:0 minor:795 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/267ebddc959ac57c572038da835a770f0388428b8136a92cef38a57e55a51aac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/267ebddc959ac57c572038da835a770f0388428b8136a92cef38a57e55a51aac/userdata/shm major:0 minor:603 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b0278ee2f5e88257e8f5b58fed5df5f9b9d95fcd14996f65f2dd1c054e4ac57/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b0278ee2f5e88257e8f5b58fed5df5f9b9d95fcd14996f65f2dd1c054e4ac57/userdata/shm major:0 minor:402 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2d6d12cb5b54a813b83ddffc4965018d471ee515affc2a1d0cb0aec4f5245797/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2d6d12cb5b54a813b83ddffc4965018d471ee515affc2a1d0cb0aec4f5245797/userdata/shm major:0 minor:814 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e08dd98145938b80638e25896f965db6111532d375ded80b0d82dda78b2522d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e08dd98145938b80638e25896f965db6111532d375ded80b0d82dda78b2522d/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2e6f428788cdb3f513e95cc63ecf43bbf7b7de35faa154cc080dbc5634ce8151/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2e6f428788cdb3f513e95cc63ecf43bbf7b7de35faa154cc080dbc5634ce8151/userdata/shm major:0 minor:186 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31db0370c08dc41ae971998fe86ac9cb0b2bcc6c08ec28eb749ac1396b3c2667/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31db0370c08dc41ae971998fe86ac9cb0b2bcc6c08ec28eb749ac1396b3c2667/userdata/shm major:0 minor:282 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/32f719b1fae3e7d132b769e21e46c31c5ab4d99d85c92e0fd1953cfcbf40dc0a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/32f719b1fae3e7d132b769e21e46c31c5ab4d99d85c92e0fd1953cfcbf40dc0a/userdata/shm major:0 minor:530 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397/userdata/shm major:0 minor:112 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/371c4924a11b805a233cd8aa1cdf64502325cac941f4d66f86f54a68683a9e74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/371c4924a11b805a233cd8aa1cdf64502325cac941f4d66f86f54a68683a9e74/userdata/shm major:0 minor:602 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3aa615a9d796b417e579505462fba818eb63c6e04f0fc9bcc949d228f425e015/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3aa615a9d796b417e579505462fba818eb63c6e04f0fc9bcc949d228f425e015/userdata/shm major:0 minor:811 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4097b46c5415e7a8b1651e87123bd125c21ee99b1c3af149041760e25e6378ee/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4097b46c5415e7a8b1651e87123bd125c21ee99b1c3af149041760e25e6378ee/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/410534ca0c42d1b797ab53ba5fbf6b12f5a1a2db22751f87c2aa91614045629d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/410534ca0c42d1b797ab53ba5fbf6b12f5a1a2db22751f87c2aa91614045629d/userdata/shm major:0 minor:824 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/42dcfde8494f887ef3a1248e80ba66a922da1760343eca1d2afd960d88b81901/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/42dcfde8494f887ef3a1248e80ba66a922da1760343eca1d2afd960d88b81901/userdata/shm major:0 minor:324 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/47463debfe8a4cd4bfc5f6610d0dc3da5ba2eb733f6d27a5379ed121dc26350d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/47463debfe8a4cd4bfc5f6610d0dc3da5ba2eb733f6d27a5379ed121dc26350d/userdata/shm major:0 minor:1158 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4ebd137aadd86a90697f1884cb52d1970bb5138e39026928308cfa18816924e6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ebd137aadd86a90697f1884cb52d1970bb5138e39026928308cfa18816924e6/userdata/shm major:0 minor:1242 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/53aff8ce601eb36b54bc43ffb3ad6e1b16683e9a02c222af744cc38c77ef8aa0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/53aff8ce601eb36b54bc43ffb3ad6e1b16683e9a02c222af744cc38c77ef8aa0/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5c76314bfc127c2893886d4278db6947daa2fbb82909a575cdadd2f5a3b4b008/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5c76314bfc127c2893886d4278db6947daa2fbb82909a575cdadd2f5a3b4b008/userdata/shm major:0 minor:1088 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5dd4d0e15147dd2dcd433c46cdfb1a10fbbcd3b91480c55088fbf67973e54f4c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5dd4d0e15147dd2dcd433c46cdfb1a10fbbcd3b91480c55088fbf67973e54f4c/userdata/shm major:0 minor:606 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6042346e04d14789f9df563facc73503846c93f9a58755284a883ae67d6dfa74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6042346e04d14789f9df563facc73503846c93f9a58755284a883ae67d6dfa74/userdata/shm major:0 minor:1185 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/64d82ee2903a4034f2cd6f4a7fd22197c2cda9f27e9a4810423ee5ca5bc5cc6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64d82ee2903a4034f2cd6f4a7fd22197c2cda9f27e9a4810423ee5ca5bc5cc6d/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/68f61c7a09ca20650d4a6ea4b0f5e362ed36ea985ba0db19d10925a21520b6ad/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/68f61c7a09ca20650d4a6ea4b0f5e362ed36ea985ba0db19d10925a21520b6ad/userdata/shm major:0 minor:817 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/714673c16fe0665ef1b16d03b2319efbfe055f0459ee84843763239d325f2af4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/714673c16fe0665ef1b16d03b2319efbfe055f0459ee84843763239d325f2af4/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79723ddb5fac1ee4009ac879b87cc7a72172f4afc11c2c1be74ae202b150e818/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79723ddb5fac1ee4009ac879b87cc7a72172f4afc11c2c1be74ae202b150e818/userdata/shm major:0 minor:934 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537/userdata/shm major:0 minor:108 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/894870cb71b93cf170c026145b9ea2c31998ab3f9fd22cdcbd9083b354b5406e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/894870cb71b93cf170c026145b9ea2c31998ab3f9fd22cdcbd9083b354b5406e/userdata/shm major:0 minor:1188 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/89dd38053c589bc34a06848b1d85945f7e695c76927a0e1433d3c5444dd1eb09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89dd38053c589bc34a06848b1d85945f7e695c76927a0e1433d3c5444dd1eb09/userdata/shm major:0 minor:791 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8a8cf406c663f290d9d876c25d67c60eea733c614a8da4d512ef2ea405de9382/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8a8cf406c663f290d9d876c25d67c60eea733c614a8da4d512ef2ea405de9382/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8b96b8f7d5979105f35e071dc0c704b23c24808d5269da621b3e55a924016c6c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8b96b8f7d5979105f35e071dc0c704b23c24808d5269da621b3e55a924016c6c/userdata/shm major:0 minor:1060 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8e403e85ba5e32d44b48160b30b4587230e7b0f26d90604af0e04232edc028bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8e403e85ba5e32d44b48160b30b4587230e7b0f26d90604af0e04232edc028bd/userdata/shm major:0 minor:776 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8edfb6097f947373026f0b09e341e33fda8a35b32db2f2f2929d0f92ff74f282/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8edfb6097f947373026f0b09e341e33fda8a35b32db2f2f2929d0f92ff74f282/userdata/shm major:0 minor:822 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8f0c2bd56106a14890572575d4661ad3be97a3bf1270d2b66fc4d182958ebb72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f0c2bd56106a14890572575d4661ad3be97a3bf1270d2b66fc4d182958ebb72/userdata/shm major:0 minor:1309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8f82575ddbb5dc664a876d323c277ef91af413f2e9ed224a0250e918dc81ae61/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8f82575ddbb5dc664a876d323c277ef91af413f2e9ed224a0250e918dc81ae61/userdata/shm major:0 minor:937 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/906a4975f221a3093bffb39f286ed36f66979e79a259e327d3df353ea75730c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/906a4975f221a3093bffb39f286ed36f66979e79a259e327d3df353ea75730c0/userdata/shm major:0 minor:818 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/922eed7d19f9dd738cf0b3fc3e3b004e0316f8e1783948356d4d447355655a65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/922eed7d19f9dd738cf0b3fc3e3b004e0316f8e1783948356d4d447355655a65/userdata/shm major:0 minor:1320 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b/userdata/shm major:0 minor:131 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/937f03ad2559d182c0cdd1d2762487960e12dca202f4d10b53ec97e755cb0a40/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/937f03ad2559d182c0cdd1d2762487960e12dca202f4d10b53ec97e755cb0a40/userdata/shm major:0 minor:805 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/93dd263e4986822eec0c710075ac8eebc645d482f87f7ef8bb335adc841614f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/93dd263e4986822eec0c710075ac8eebc645d482f87f7ef8bb335adc841614f2/userdata/shm major:0 minor:806 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9e66323acb79027dbee260b2bd6ea317379967ab104a220c1093c958a45ebc27/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9e66323acb79027dbee260b2bd6ea317379967ab104a220c1093c958a45ebc27/userdata/shm major:0 minor:1083 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a1b7fe82470a07c52d024e13d01069cc6897029891ba56a4cf999816f805e9a7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a1b7fe82470a07c52d024e13d01069cc6897029891ba56a4cf999816f805e9a7/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a51d75323a923af00f3bd0e9f47fc2b98d3fa4f81d500b08ed1b5763acd5b079/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a51d75323a923af00f3bd0e9f47fc2b98d3fa4f81d500b08ed1b5763acd5b079/userdata/shm major:0 minor:808 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aa70a59110835e6aad43cf1cb5ed855bb86de37892d716ff87772c740d916d65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aa70a59110835e6aad43cf1cb5ed855bb86de37892d716ff87772c740d916d65/userdata/shm major:0 minor:338 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b23dfe329a1134a3919827a4fef6a742a5c3a54647b515a5ae24efa737eaeba7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b23dfe329a1134a3919827a4fef6a742a5c3a54647b515a5ae24efa737eaeba7/userdata/shm major:0 minor:48 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b5410db202b2d2565e3f21ef6f188dc18cdaa71ef843bfa19039eca0376e0d6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b5410db202b2d2565e3f21ef6f188dc18cdaa71ef843bfa19039eca0376e0d6a/userdata/shm major:0 minor:513 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720/userdata/shm major:0 minor:145 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b5eb5695ccec6b92144f40353b32b80192cdcb4ed71afa4329c2fd87d4604e30/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b5eb5695ccec6b92144f40353b32b80192cdcb4ed71afa4329c2fd87d4604e30/userdata/shm major:0 minor:938 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b74c9c781dd953b15122d114627fe038414c5f0f995df649cb54aad5bc2f4e07/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b74c9c781dd953b15122d114627fe038414c5f0f995df649cb54aad5bc2f4e07/userdata/shm major:0 minor:501 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29/userdata/shm major:0 minor:144 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c125f0138a2358ed33a087eaebb28b417878c3d57e675823d35e0431d5663d9e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c125f0138a2358ed33a087eaebb28b417878c3d57e675823d35e0431d5663d9e/userdata/shm major:0 minor:605 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c932287e23f5b8d24efa88b511b35c92261a32985b4d2a556c22eb4a08ba11cb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c932287e23f5b8d24efa88b511b35c92261a32985b4d2a556c22eb4a08ba11cb/userdata/shm major:0 minor:1087 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca1f4967e893fa63378ca09c1eeb80d103b9e8e60104bb8036c8ccc5faa3a035/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca1f4967e893fa63378ca09c1eeb80d103b9e8e60104bb8036c8ccc5faa3a035/userdata/shm major:0 minor:676 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cb022277db501e47c11144c7784ae45171d1fe684dae009de53aad7904c4eadc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cb022277db501e47c11144c7784ae45171d1fe684dae009de53aad7904c4eadc/userdata/shm major:0 minor:821 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cd174549be5b88f39588bafbc22af8049014b8bbed26dfd817fa5184b48774e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cd174549be5b88f39588bafbc22af8049014b8bbed26dfd817fa5184b48774e3/userdata/shm major:0 minor:403 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d243d9f4d6d9c16fd75ab0c5744222bf367eeb4a55dc3a56ad2f15b145aca434/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d243d9f4d6d9c16fd75ab0c5744222bf367eeb4a55dc3a56ad2f15b145aca434/userdata/shm major:0 minor:825 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d279f5c83a7334bb036cb98c51916708c8e0553fc71eae75ca717993b0118072/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d279f5c83a7334bb036cb98c51916708c8e0553fc71eae75ca717993b0118072/userdata/shm major:0 minor:1183 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3656437a9ce9676295b2eb9bd8bc3fb63776e655e923084238b22192495f791/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3656437a9ce9676295b2eb9bd8bc3fb63776e655e923084238b22192495f791/userdata/shm major:0 minor:829 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da13c43822ff6ebef72ea5dada557656eab3613ad082a77190dd348e4d4caec1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da13c43822ff6ebef72ea5dada557656eab3613ad082a77190dd348e4d4caec1/userdata/shm major:0 minor:384 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd7b027ed4dfa318c6f765780e7da4b378d4a45eec9c4d60403e7f1cb887d422/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd7b027ed4dfa318c6f765780e7da4b378d4a45eec9c4d60403e7f1cb887d422/userdata/shm major:0 minor:60 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e0b212afd7d07d05ad4af03681bd28027ddd652c6e3c593a77163ced8697a47e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e0b212afd7d07d05ad4af03681bd28027ddd652c6e3c593a77163ced8697a47e/userdata/shm major:0 minor:407 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e0d20c57fe745f0a7a074b91ba4c54bbdd4dc326b155cd4b8a578d9c21d5db21/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e0d20c57fe745f0a7a074b91ba4c54bbdd4dc326b155cd4b8a578d9c21d5db21/userdata/shm major:0 minor:797 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ed120d47621f85e51e2ef771ce28687d4c0566d41771f7a4a34982cc8d975798/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ed120d47621f85e51e2ef771ce28687d4c0566d41771f7a4a34982cc8d975798/userdata/shm major:0 minor:593 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef21d52c34e0ff209e507b2e241489d3a22d4196f3b18bf8ced7797fda251b4a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef21d52c34e0ff209e507b2e241489d3a22d4196f3b18bf8ced7797fda251b4a/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f05f4c8572660fb60933e1a43cdf2d946cf6624f2ede2a6f783e25d928dd09bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f05f4c8572660fb60933e1a43cdf2d946cf6624f2ede2a6f783e25d928dd09bd/userdata/shm major:0 minor:935 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f5885425638056ce98b14e0964ddb8ab6fa82dc0c949c580e04a0b062a448107/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f5885425638056ce98b14e0964ddb8ab6fa82dc0c949c580e04a0b062a448107/userdata/shm major:0 minor:747 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fd03b91adf31c70f04d420a5ba045d6cd9e1f68b14c47322c66de7814d71ccf4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fd03b91adf31c70f04d420a5ba045d6cd9e1f68b14c47322c66de7814d71ccf4/userdata/shm major:0 minor:404 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03e4cebe-f3df-423f-be2b-7fb22bd58341/volumes/kubernetes.io~projected/kube-api-access-f9pp4:{mountpoint:/var/lib/kubelet/pods/03e4cebe-f3df-423f-be2b-7fb22bd58341/volumes/kubernetes.io~projected/kube-api-access-f9pp4 major:0 minor:389 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:455 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~empty-dir/tmp major:0 minor:488 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~projected/kube-api-access-dh2rh:{mountpoint:/var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~projected/kube-api-access-dh2rh major:0 minor:489 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e05783d-6bd1-4c71-87d9-1eb3edd827b3/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0e05783d-6bd1-4c71-87d9-1eb3edd827b3/volumes/kubernetes.io~projected/kube-api-access major:0 minor:675 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e05783d-6bd1-4c71-87d9-1eb3edd827b3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0e05783d-6bd1-4c71-87d9-1eb3edd827b3/volumes/kubernetes.io~secret/serving-cert major:0 minor:666 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~projected/kube-api-access-46fll:{mountpoint:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~projected/kube-api-access-46fll major:0 minor:1315 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/federate-client-tls:{mountpoint:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/federate-client-tls major:0 minor:1311 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/secret-telemeter-client:{mountpoint:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/secret-telemeter-client major:0 minor:1313 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config major:0 minor:1314 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/telemeter-client-tls:{mountpoint:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/telemeter-client-tls major:0 minor:1312 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/116e6b47-d435-49ca-abb5-088788daf16a/volumes/kubernetes.io~projected/kube-api-access-jt9fb:{mountpoint:/var/lib/kubelet/pods/116e6b47-d435-49ca-abb5-088788daf16a/volumes/kubernetes.io~projected/kube-api-access-jt9fb major:0 minor:760 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/116e6b47-d435-49ca-abb5-088788daf16a/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/116e6b47-d435-49ca-abb5-088788daf16a/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:752 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/1533c5fa-0387-40bd-a959-e714b65cdacc/volumes/kubernetes.io~projected/kube-api-access-jspzm:{mountpoint:/var/lib/kubelet/pods/1533c5fa-0387-40bd-a959-e714b65cdacc/volumes/kubernetes.io~projected/kube-api-access-jspzm major:0 minor:1082 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~projected/kube-api-access major:0 minor:269 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~secret/serving-cert major:0 minor:264 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa/volumes/kubernetes.io~projected/kube-api-access-ckfnc:{mountpoint:/var/lib/kubelet/pods/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa/volumes/kubernetes.io~projected/kube-api-access-ckfnc major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2/volumes/kubernetes.io~projected/kube-api-access-4bf6w:{mountpoint:/var/lib/kubelet/pods/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2/volumes/kubernetes.io~projected/kube-api-access-4bf6w major:0 minor:1059 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2/volumes/kubernetes.io~secret/proxy-tls major:0 minor:1054 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:254 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~secret/serving-cert major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:771 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~projected/kube-api-access-cczbm:{mountpoint:/var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~projected/kube-api-access-cczbm major:0 minor:761 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:726 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2c6bb439-ed17-4761-b193-580be5f6aa00/volumes/kubernetes.io~projected/kube-api-access-pl6rx:{mountpoint:/var/lib/kubelet/pods/2c6bb439-ed17-4761-b193-580be5f6aa00/volumes/kubernetes.io~projected/kube-api-access-pl6rx major:0 minor:921 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~projected/kube-api-access-kc42f:{mountpoint:/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~projected/kube-api-access-kc42f major:0 minor:1241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1235 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32fd577d-8966-4ab1-95cf-357291084156/volumes/kubernetes.io~projected/kube-api-access-fh2pc:{mountpoint:/var/lib/kubelet/pods/32fd577d-8966-4ab1-95cf-357291084156/volumes/kubernetes.io~projected/kube-api-access-fh2pc major:0 minor:741 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/32fd577d-8966-4ab1-95cf-357291084156/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/32fd577d-8966-4ab1-95cf-357291084156/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:746 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3363f001-1cfa-41f5-b245-30cc99dd09cb/volumes/kubernetes.io~projected/kube-api-access-589rv:{mountpoint:/var/lib/kubelet/pods/3363f001-1cfa-41f5-b245-30cc99dd09cb/volumes/kubernetes.io~projected/kube-api-access-589rv major:0 minor:494 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3363f001-1cfa-41f5-b245-30cc99dd09cb/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/3363f001-1cfa-41f5-b245-30cc99dd09cb/volumes/kubernetes.io~secret/metrics-tls major:0 minor:499 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/volumes/kubernetes.io~projected/ca-certs major:0 minor:515 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/volumes/kubernetes.io~projected/kube-api-access-qgl4j:{mountpoint:/var/lib/kubelet/pods/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/volumes/kubernetes.io~projected/kube-api-access-qgl4j major:0 minor:500 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~projected/kube-api-access-ddfqw:{mountpoint:/var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~projected/kube-api-access-ddfqw major:0 minor:768 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~secret/cert major:0 minor:657 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:706 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39c4d0aa-c372-4d02-9302-337e68b56784/volumes/kubernetes.io~projected/kube-api-access-b2fkp:{mountpoint:/var/lib/kubelet/pods/39c4d0aa-c372-4d02-9302-337e68b56784/volumes/kubernetes.io~projected/kube-api-access-b2fkp major:0 minor:799 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39c4d0aa-c372-4d02-9302-337e68b56784/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/39c4d0aa-c372-4d02-9302-337e68b56784/volumes/kubernetes.io~secret/proxy-tls major:0 minor:793 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:279 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/kube-api-access-5q2r9:{mountpoint:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/kube-api-access-5q2r9 major:0 minor:267 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~secret/metrics-tls major:0 minor:400 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f511d03-a182-4968-ba40-5c5c10e5e6be/volumes/kubernetes.io~projected/kube-api-access-4vdmz:{mountpoint:/var/lib/kubelet/pods/3f511d03-a182-4968-ba40-5c5c10e5e6be/volumes/kubernetes.io~projected/kube-api-access-4vdmz major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f511d03-a182-4968-ba40-5c5c10e5e6be/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3f511d03-a182-4968-ba40-5c5c10e5e6be/volumes/kubernetes.io~secret/serving-cert major:0 minor:656 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~projected/kube-api-access-9zxwj:{mountpoint:/var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~projected/kube-api-access-9zxwj major:0 minor:928 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:927 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~secret/webhook-cert major:0 minor:926 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~projected/kube-api-access-d4d5x:{mountpoint:/var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~projected/kube-api-access-d4d5x major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:599 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4/volumes/kubernetes.io~projected/kube-api-access-7vjzn:{mountpoint:/var/lib/kubelet/pods/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4/volumes/kubernetes.io~projected/kube-api-access-7vjzn major:0 minor:772 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:731 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~projected/kube-api-access-m9kf2:{mountpoint:/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~projected/kube-api-access-m9kf2 major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~secret/serving-cert major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~projected/kube-api-access-wwc5b:{mountpoint:/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~projected/kube-api-access-wwc5b major:0 minor:259 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~secret/serving-cert major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d51ce58-55f6-45d5-9d5d-7b31ae42380a/volumes/kubernetes.io~projected/kube-api-access-2kh6l:{mountpoint:/var/lib/kubelet/pods/5d51ce58-55f6-45d5-9d5d-7b31ae42380a/volumes/kubernetes.io~projected/kube-api-access-2kh6l major:0 minor:769 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d51ce58-55f6-45d5-9d5d-7b31ae42380a/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/5d51ce58-55f6-45d5-9d5d-7b31ae42380a/volumes/kubernetes.io~secret/cert major:0 minor:727 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~projected/kube-api-access-62xzk:{mountpoint:/var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~projected/kube-api-access-62xzk major:0 minor:255 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03/volumes/kubernetes.io~projected/kube-api-access-rkz2q:{mountpoint:/var/lib/kubelet/pods/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03/volumes/kubernetes.io~projected/kube-api-access-rkz2q major:0 minor:1308 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1304 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~projected/kube-api-access-zb68s:{mountpoint:/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~projected/kube-api-access-zb68s major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:397 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:398 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~projected/kube-api-access-79h66:{mountpoint:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~projected/kube-api-access-79h66 major:0 minor:143 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75b4304c-09f2-499e-8c2f-da603e43ba72/volumes/kubernetes.io~projected/kube-api-access-7jflg:{mountpoint:/var/lib/kubelet/pods/75b4304c-09f2-499e-8c2f-da603e43ba72/volumes/kubernetes.io~projected/kube-api-access-7jflg major:0 minor:929 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/767424fb-babf-4b73-b5e2-0bee65fcf207/volumes/kubernetes.io~projected/kube-api-access-hl828:{mountpoint:/var/lib/kubelet/pods/767424fb-babf-4b73-b5e2-0bee65fcf207/volumes/kubernetes.io~projected/kube-api-access-hl828 major:0 minor:130 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/798dcf46-8377-46b8-8387-5261d9bbefa1/volumes/kubernetes.io~projected/kube-api-access-jl24z:{mountpoint:/var/lib/kubelet/pods/798dcf46-8377-46b8-8387-5261d9bbefa1/volumes/kubernetes.io~projected/kube-api-access-jl24z major:0 minor:498 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~projected/kube-api-access-fgf94:{mountpoint:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~projected/kube-api-access-fgf94 major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/etcd-client major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/serving-cert major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~projected/kube-api-access-8ktz5:{mountpoint:/var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~projected/kube-api-access-8ktz5 major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~secret/metrics-certs major:0 minor:597 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~projected/kube-api-access-hgl5l:{mountpoint:/var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~projected/kube-api-access-hgl5l major:0 minor:1181 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1176 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1178 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~projected/kube-api-access-4lt5r:{mountpoint:/var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~projected/kube-api-access-4lt5r major:0 minor:692 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/encryption-config major:0 minor:690 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/etcd-client major:0 minor:686 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/serving-cert major:0 minor:691 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~projected/kube-api-access-bs794:{mountpoint:/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~projected/kube-api-access-bs794 major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f3825c1-975c-40b5-a6ad-0f200968b3cd/volumes/kubernetes.io~projected/kube-api-access-l8z6s:{mountpoint:/var/lib/kubelet/pods/8f3825c1-975c-40b5-a6ad-0f200968b3cd/volumes/kubernetes.io~projected/kube-api-access-l8z6s major:0 minor:932 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~projected/kube-api-access-gmf87:{mountpoint:/var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~projected/kube-api-access-gmf87 major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~projected/kube-api-access-5bwl7:{mountpoint:/var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~projected/kube-api-access-5bwl7 major:0 minor:754 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:700 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~secret/srv-cert major:0 minor:715 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~projected/kube-api-access-jrhmp:{mountpoint:/var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~projected/kube-api-access-jrhmp major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~secret/metrics-tls major:0 minor:401 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3561f49-0808-4d96-95ec-456fcb5c5bb4/volumes/kubernetes.io~projected/kube-api-access-r5tgk:{mountpoint:/var/lib/kubelet/pods/a3561f49-0808-4d96-95ec-456fcb5c5bb4/volumes/kubernetes.io~projected/kube-api-access-r5tgk major:0 minor:930 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a3561f49-0808-4d96-95ec-456fcb5c5bb4/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/a3561f49-0808-4d96-95ec-456fcb5c5bb4/volumes/kubernetes.io~secret/proxy-tls major:0 minor:931 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67/volumes/kubernetes.io~projected/kube-api-access-p67bp:{mountpoint:/var/lib/kubelet/pods/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67/volumes/kubernetes.io~projected/kube-api-access-p67bp major:0 minor:337 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67/volumes/kubernetes.io~secret/signing-key major:0 minor:336 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~projected/kube-api-access-dtnxg:{mountpoint:/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~projected/kube-api-access-dtnxg major:0 minor:625 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/encryption-config major:0 minor:567 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/etcd-client major:0 minor:566 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/serving-cert major:0 minor:622 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b426cb33-1624-45e6-b8c5-4e8d251f6339/volumes/kubernetes.io~projected/kube-api-access-hjtv8:{mountpoint:/var/lib/kubelet/pods/b426cb33-1624-45e6-b8c5-4e8d251f6339/volumes/kubernetes.io~projected/kube-api-access-hjtv8 major:0 minor:766 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b426cb33-1624-45e6-b8c5-4e8d251f6339/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b426cb33-1624-45e6-b8c5-4e8d251f6339/volumes/kubernetes.io~secret/serving-cert major:0 minor:751 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b46907eb-36d6-4410-b7d8-8012b254c861/volumes/kubernetes.io~projected/kube-api-access-k8dtv:{mountpoint:/var/lib/kubelet/pods/b46907eb-36d6-4410-b7d8-8012b254c861/volumes/kubernetes.io~projected/kube-api-access-k8dtv major:0 minor:767 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b46907eb-36d6-4410-b7d8-8012b254c861/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/b46907eb-36d6-4410-b7d8-8012b254c861/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:699 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b79ef90c-dc66-4d5f-8943-2c3ac68796ba/volumes/kubernetes.io~projected/kube-api-access-zb4rw:{mountpoint:/var/lib/kubelet/pods/b79ef90c-dc66-4d5f-8943-2c3ac68796ba/volumes/kubernetes.io~projected/kube-api-access-zb4rw major:0 minor:507 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~projected/kube-api-access-77lsr:{mountpoint:/var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~projected/kube-api-access-77lsr major:0 minor:1157 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1156 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b9a96f0d-16b8-47ee-baf2-807d2260fa71/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/b9a96f0d-16b8-47ee-baf2-807d2260fa71/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1077 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~projected/kube-api-access-wvm29:{mountpoint:/var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~projected/kube-api-access-wvm29 major:0 minor:1080 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/default-certificate major:0 minor:1076 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1079 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/stats-auth major:0 minor:1078 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~projected/kube-api-access-f92qq:{mountpoint:/var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~projected/kube-api-access-f92qq major:0 minor:1174 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1170 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1175 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c00ee01c-143b-4e44-823c-c6bfdedb8ed6/volumes/kubernetes.io~projected/kube-api-access-jx4rw:{mountpoint:/var/lib/kubelet/pods/c00ee01c-143b-4e44-823c-c6bfdedb8ed6/volumes/kubernetes.io~projected/kube-api-access-jx4rw major:0 minor:73 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~projected/kube-api-access-4p8zb:{mountpoint:/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~projected/kube-api-access-4p8zb major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~secret/webhook-cert major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~projected/kube-api-access-mdpfz:{mountpoint:/var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~projected/kube-api-access-mdpfz major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:600 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~projected/kube-api-access-tlwzq:{mountpoint:/var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~projected/kube-api-access-tlwzq major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~secret/serving-cert major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~projected/kube-api-access-qvznm:{mountpoint:/var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~projected/kube-api-access-qvznm major:0 minor:1100 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~secret/certs major:0 minor:1099 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1098 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~projected/kube-api-access-25dbj:{mountpoint:/var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~projected/kube-api-access-25dbj major:0 minor:755 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:732 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~secret/srv-cert major:0 minor:728 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cd674e58-b749-46fb-8a28-66012fd8b401/volumes/kubernetes.io~projected/kube-api-access-67qg5:{mountpoint:/var/lib/kubelet/pods/cd674e58-b749-46fb-8a28-66012fd8b401/volumes/kubernetes.io~projected/kube-api-access-67qg5 major:0 minor:925 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~projected/kube-api-access-xj8cq:{mountpoint:/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~projected/kube-api-access-xj8cq major:0 minor:245 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~secret/serving-cert major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~projected/ca-certs major:0 minor:509 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~projected/kube-api-access-fzp4b:{mountpoint:/var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~projected/kube-api-access-fzp4b major:0 minor:510 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:508 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4/volumes/kubernetes.io~projected/kube-api-access-9lkf2:{mountpoint:/var/lib/kubelet/pods/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4/volumes/kubernetes.io~projected/kube-api-access-9lkf2 major:0 minor:753 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4/volumes/kubernetes.io~secret/serving-cert major:0 minor:734 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~projected/kube-api-access-zcb72:{mountpoint:/var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~projected/kube-api-access-zcb72 major:0 minor:277 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:598 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400/volumes/kubernetes.io~projected/kube-api-access-nb75b:{mountpoint:/var/lib/kubelet/pods/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400/volumes/kubernetes.io~projected/kube-api-access-nb75b major:0 minor:759 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:722 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6a0fc47-b446-4902-9f8a-04870cbafcab/volumes/kubernetes.io~projected/kube-api-access-kx4qf:{mountpoint:/var/lib/kubelet/pods/e6a0fc47-b446-4902-9f8a-04870cbafcab/volumes/kubernetes.io~projected/kube-api-access-kx4qf major:0 minor:773 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6a0fc47-b446-4902-9f8a-04870cbafcab/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/e6a0fc47-b446-4902-9f8a-04870cbafcab/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:733 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~projected/kube-api-access major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~secret/serving-cert major:0 minor:235 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5/volumes/kubernetes.io~projected/kube-api-access-5dwz2:{mountpoint:/var/lib/kubelet/pods/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5/volumes/kubernetes.io~projected/kube-api-access-5dwz2 major:0 minor:796 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5/volumes/kubernetes.io~secret/serving-cert major:0 minor:792 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~projected/kube-api-access-lm88x:{mountpoint:/var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~projected/kube-api-access-lm88x major:0 minor:1180 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1179 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1177 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3cd3830-62b5-49d1-917e-bd993d685c65/volumes/kubernetes.io~projected/kube-api-access-957g9:{mountpoint:/var/lib/kubelet/pods/f3cd3830-62b5-49d1-917e-bd993d685c65/volumes/kubernetes.io~projected/kube-api-access-957g9 major:0 minor:393 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3cd3830-62b5-49d1-917e-bd993d685c65/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/f3cd3830-62b5-49d1-917e-bd993d685c65/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:392 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b/volumes/kubernetes.io~projected/kube-api-access-6b7f4:{mountpoint:/var/lib/kubelet/pods/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b/volumes/kubernetes.io~projected/kube-api-access-6b7f4 major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~projected/kube-api-access-dcj62:{mountpoint:/var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~projected/kube-api-access-dcj62 major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f938daff-1d36-4348-a689-3d1607058296/volumes/kubernetes.io~projected/kube-api-access-xbt92:{mountpoint:/var/lib/kubelet/pods/f938daff-1d36-4348-a689-3d1607058296/volumes/kubernetes.io~projected/kube-api-access-xbt92 major:0 minor:445 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f938daff-1d36-4348-a689-3d1607058296/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/f938daff-1d36-4348-a689-3d1607058296/volumes/kubernetes.io~secret/cert major:0 minor:444 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9/volumes/kubernetes.io~projected/kube-api-access-h5djr:{mountpoint:/var/lib/kubelet/pods/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9/volumes/kubernetes.io~projected/kube-api-access-h5djr major:0 minor:286 fsType:tmpfs blockSize:0} overlay_0-1003:{mountpoint:/var/lib/containers/storage/overlay/5206c84675e2ff8870ddbcf2e38dfa014f469eae167ec54a0de02fca6d09f447/merged major:0 minor:1003 fsType:overlay blockSize:0} overlay_0-1005:{mountpoint:/var/lib/containers/storage/overlay/96c9a5d9f3c100e5cba1ad002b6c4676093f4bbf6f064838954248c15da1e014/merged major:0 minor:1005 fsType:overlay blockSize:0} 
overlay_0-1007:{mountpoint:/var/lib/containers/storage/overlay/8836834fcd99ba72b34d42baa4a122a730bb826a7a12e030cc8643996bc8757f/merged major:0 minor:1007 fsType:overlay blockSize:0} overlay_0-1017:{mountpoint:/var/lib/containers/storage/overlay/afa8a5a4267044ac0398c9b1a6d6e14c1e957aeb6fb38d1e6e16f704de757dfa/merged major:0 minor:1017 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/44fe6c1778db0c91b1b45ea04e271340d7b72486fa632d6c44dcc083bdcbb1fc/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1023:{mountpoint:/var/lib/containers/storage/overlay/7bca58112ef1cbe838a80232a55e22930f407e908107e85af3df0f9e8b035b16/merged major:0 minor:1023 fsType:overlay blockSize:0} overlay_0-1027:{mountpoint:/var/lib/containers/storage/overlay/a05dcbcbfe0f0fc73978a896dbd8318e0829d5d0a6a0365e2091c43fb8215e9c/merged major:0 minor:1027 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/f59b09a26661cb0524d690b218768a853284a860a8224f0dcd764609842ddb8e/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1030:{mountpoint:/var/lib/containers/storage/overlay/7c75482e9aac362b602eaa9b6fd44b5828bca9e38e81975283b67f298b5b6c52/merged major:0 minor:1030 fsType:overlay blockSize:0} overlay_0-1036:{mountpoint:/var/lib/containers/storage/overlay/d9b28d17185d13aaa518ab8d749e04bbeceba8ceda52151e8ce022160aa8dec7/merged major:0 minor:1036 fsType:overlay blockSize:0} overlay_0-1038:{mountpoint:/var/lib/containers/storage/overlay/b72b17334bcc5122ea56a74a728ce3b17fff377c4eefa7e5cb3354b52529a7e6/merged major:0 minor:1038 fsType:overlay blockSize:0} overlay_0-1044:{mountpoint:/var/lib/containers/storage/overlay/7e157d21d81c1382dc3356e3343d70274c6c01c5403867b4059a748a274ecf98/merged major:0 minor:1044 fsType:overlay blockSize:0} overlay_0-1046:{mountpoint:/var/lib/containers/storage/overlay/0e4de6a1e3c15295b3c43860fa21b47748c8772e4f6298683b0b90c2ead3767c/merged major:0 minor:1046 fsType:overlay blockSize:0} 
overlay_0-1048:{mountpoint:/var/lib/containers/storage/overlay/792915f4823456c04be8dd40e5070495fc3e56e69d3ccbb3313f20bf231cfb79/merged major:0 minor:1048 fsType:overlay blockSize:0} overlay_0-1049:{mountpoint:/var/lib/containers/storage/overlay/7bc5704c83aaf05096dfd00ec00d2450ed174d8f2175f053697fe778973eff60/merged major:0 minor:1049 fsType:overlay blockSize:0} overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/cd7e9078831d6a3540d922105e1d23265998a5bbf315e0d7208b50b307250b5b/merged major:0 minor:1062 fsType:overlay blockSize:0} overlay_0-1064:{mountpoint:/var/lib/containers/storage/overlay/b59543187b65de69c4a54f4268cb6808ff8bd48e3e3e0edba87c6b6ffbe88115/merged major:0 minor:1064 fsType:overlay blockSize:0} overlay_0-1066:{mountpoint:/var/lib/containers/storage/overlay/5e1983e981d94355c21a8cfd7550896d8048ecf27596ab2dd6461afb8a19d5af/merged major:0 minor:1066 fsType:overlay blockSize:0} overlay_0-1081:{mountpoint:/var/lib/containers/storage/overlay/5bf8d6bd64238c9b0827ec832b94ec9fc908276b30ef78768a3fced4a3e0011b/merged major:0 minor:1081 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/2c99d1520eb4ce129454fe9a74cf2bbb0d7283b5d13c253c1bb26dd09a170766/merged major:0 minor:1086 fsType:overlay blockSize:0} overlay_0-1090:{mountpoint:/var/lib/containers/storage/overlay/9051c5ae43901b8be4479c29cf4eb8a287f0bd584f6d21feaf0d8cd65589d3e3/merged major:0 minor:1090 fsType:overlay blockSize:0} overlay_0-1095:{mountpoint:/var/lib/containers/storage/overlay/f0b0e3f71e6a6863a482fba99c574696c87728606827874ed502aca0705fe1fc/merged major:0 minor:1095 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/198c64545d7d1ffc41114e325ca9700a21625c28b7d48528adf238254baecf28/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/81796ead5d4c2f08e3ddc9f813ddd71124a15a43d23db04c9cbf641e81a87798/merged major:0 minor:110 fsType:overlay blockSize:0} 
overlay_0-1102:{mountpoint:/var/lib/containers/storage/overlay/ce2ca04ad7a57973690eb0402f5076cffa71791c5b0940dfec3661468271722c/merged major:0 minor:1102 fsType:overlay blockSize:0} overlay_0-1104:{mountpoint:/var/lib/containers/storage/overlay/76bcd7ecb7b1a52c7faa5f7b5942d6d7e5330d57d7e512ef3f396db030865192/merged major:0 minor:1104 fsType:overlay blockSize:0} overlay_0-1113:{mountpoint:/var/lib/containers/storage/overlay/1e44d010e16aa2a4ce9aabf78f36f180fcaf20a5f935bb0cd9091a0817f425b0/merged major:0 minor:1113 fsType:overlay blockSize:0} overlay_0-1123:{mountpoint:/var/lib/containers/storage/overlay/a0a1b69529ed8c0be95139c90f5ca8b59411d4e30fac353a49f9cd9c51754d95/merged major:0 minor:1123 fsType:overlay blockSize:0} overlay_0-1126:{mountpoint:/var/lib/containers/storage/overlay/9834c5dde337ef21fa058756ec350fa5cc8131a984d7f5b3d9057f0211f7176a/merged major:0 minor:1126 fsType:overlay blockSize:0} overlay_0-1128:{mountpoint:/var/lib/containers/storage/overlay/01eb33b8c091c2ca4bd8b57c7583586777ad0d2f5ae54b0cea75e62ef19edc7f/merged major:0 minor:1128 fsType:overlay blockSize:0} overlay_0-1130:{mountpoint:/var/lib/containers/storage/overlay/efdf1a232b3eb3e4d85358a149d0f4ed01c3f4d9c1e2e1abbd944d3099c0c3e3/merged major:0 minor:1130 fsType:overlay blockSize:0} overlay_0-1133:{mountpoint:/var/lib/containers/storage/overlay/ab83e4117474ccc9decd2bebd74e5e79729008107dd926d8d413bab1b5243147/merged major:0 minor:1133 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/86c64ebc49fa12a2558c14e0736340d2f710dc402f171b5bdd984d8da1c2f548/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/4fdec9c6578199a49a80bc23ee60d473afa665baed62d2fc66fabdba076bc057/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1160:{mountpoint:/var/lib/containers/storage/overlay/426936777a2fd915c9eb9ebad4c366f054d62d78088d51a70447f13a88382b51/merged major:0 minor:1160 fsType:overlay blockSize:0} 
overlay_0-1162:{mountpoint:/var/lib/containers/storage/overlay/c4eef98af86a2511ededdbb2f9a45cc1f66febec59d35f6b97be7c9b56735934/merged major:0 minor:1162 fsType:overlay blockSize:0} overlay_0-1164:{mountpoint:/var/lib/containers/storage/overlay/9d27761be0e76ad66c825b00c4db05a810193b4668c66a747a8a82c9194a62b4/merged major:0 minor:1164 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/582a6898b7fe7f85a28584d6800d32d73b8b0e2e6ef1f022270ae49ef504eb4b/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-1187:{mountpoint:/var/lib/containers/storage/overlay/23cd0293b11f56ea3c9e62d0df6f83c672115f78c18cf6bcb6d1cd40e5361a1e/merged major:0 minor:1187 fsType:overlay blockSize:0} overlay_0-1190:{mountpoint:/var/lib/containers/storage/overlay/8beeda880d4975df00561dd9680bdb904b84902aa65821ef36e3f59802f3c5e2/merged major:0 minor:1190 fsType:overlay blockSize:0} overlay_0-1192:{mountpoint:/var/lib/containers/storage/overlay/d6dd23fb92d0e9ded64959fc99a6f535da71ee678600a72683240d9a452d9cbf/merged major:0 minor:1192 fsType:overlay blockSize:0} overlay_0-1194:{mountpoint:/var/lib/containers/storage/overlay/1fe7825890b07ccdd48ded0d64e3a95071d5bc9b0335110fcae39f7444f161a7/merged major:0 minor:1194 fsType:overlay blockSize:0} overlay_0-1196:{mountpoint:/var/lib/containers/storage/overlay/8d54a4fdc28347a520730db2aa0259236664ba16f22c1796cc01c029b90e388d/merged major:0 minor:1196 fsType:overlay blockSize:0} overlay_0-120:{mountpoint:/var/lib/containers/storage/overlay/a1b3e7ee9d58c6d9bf74643775215d673de149733e2b58eec692c5b8c2ec77cb/merged major:0 minor:120 fsType:overlay blockSize:0} overlay_0-1202:{mountpoint:/var/lib/containers/storage/overlay/c74346652521d604a451434e4c7293fac6aa6889e04fc81e827609e802403c57/merged major:0 minor:1202 fsType:overlay blockSize:0} overlay_0-1207:{mountpoint:/var/lib/containers/storage/overlay/b41bde6936b279d78900f77a25f970804c282f58e89e55a50934b4283c070dfe/merged major:0 minor:1207 fsType:overlay blockSize:0} 
overlay_0-1209:{mountpoint:/var/lib/containers/storage/overlay/db2306dfac9008935d0b1e36949e4f043fbecde35f5870218d51e75328b438ff/merged major:0 minor:1209 fsType:overlay blockSize:0} overlay_0-1214:{mountpoint:/var/lib/containers/storage/overlay/eda91023fd2de73cb78ed3993686bab44958a0c4e3b8d2fd778e51bba808f1a7/merged major:0 minor:1214 fsType:overlay blockSize:0} overlay_0-1216:{mountpoint:/var/lib/containers/storage/overlay/6adc142d4bcb0d6c0481212babcd4e14647f1c05235c061f70a8388e76690b68/merged major:0 minor:1216 fsType:overlay blockSize:0} overlay_0-1218:{mountpoint:/var/lib/containers/storage/overlay/70fa3e9d272968676d9b3492e9c3f0c4d0d2446e83ce3cb6c76effe90cc8e5fb/merged major:0 minor:1218 fsType:overlay blockSize:0} overlay_0-122:{mountpoint:/var/lib/containers/storage/overlay/491d3b70de17b8c4c86e4ab37667b3096ec21e0a8aa6780b55992909f2713027/merged major:0 minor:122 fsType:overlay blockSize:0} overlay_0-1220:{mountpoint:/var/lib/containers/storage/overlay/959f0be9a966b95da399eb8bd8af8e06ce6c758ec47f0ae4c8bbbc2001d7a2b8/merged major:0 minor:1220 fsType:overlay blockSize:0} overlay_0-1244:{mountpoint:/var/lib/containers/storage/overlay/f47b7a6ba969032fcfa5fb1544c56c45a5e414eee0274b1f0702ee4454bc835c/merged major:0 minor:1244 fsType:overlay blockSize:0} overlay_0-1246:{mountpoint:/var/lib/containers/storage/overlay/4282cb683a56cd4f1a6cd35e07b1f17d3da3ab0e2abdc32e664805cfa0fdc4e9/merged major:0 minor:1246 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/730cf1ef4ef1bbadbacbf83b2b40fbcbc5835d1081bb310df9cfa6792ca447d9/merged major:0 minor:125 fsType:overlay blockSize:0} overlay_0-1253:{mountpoint:/var/lib/containers/storage/overlay/95da78b9d911445020eb023f9b148c2e71a766723723168bc6ccce6a6b0ec17f/merged major:0 minor:1253 fsType:overlay blockSize:0} overlay_0-1261:{mountpoint:/var/lib/containers/storage/overlay/249a8171405852edb25dd81541e4ab7eba65143767b0de2db75fea5a6164032d/merged major:0 minor:1261 fsType:overlay blockSize:0} 
overlay_0-1263:{mountpoint:/var/lib/containers/storage/overlay/68f9d1e85abd440b2dded2c2aaa1eb3c817d5061ef231c586196da3f0d491cff/merged major:0 minor:1263 fsType:overlay blockSize:0} overlay_0-1266:{mountpoint:/var/lib/containers/storage/overlay/6a313de556e759215773ed17f1df8f7ac0e8e9e5e602d8bee7d07c9c85bce82f/merged major:0 minor:1266 fsType:overlay blockSize:0} overlay_0-1269:{mountpoint:/var/lib/containers/storage/overlay/2ccdc77efa6476358ef64effe1a10f43934a9ecc846397b0742823627678d76f/merged major:0 minor:1269 fsType:overlay blockSize:0} overlay_0-1271:{mountpoint:/var/lib/containers/storage/overlay/ef68238672d0b5a76d71d8d9803ca87e06c9d6ec09ede4f02fa6b1125310fc19/merged major:0 minor:1271 fsType:overlay blockSize:0} overlay_0-1280:{mountpoint:/var/lib/containers/storage/overlay/9cb1d095bab81bd11a5d33cef4299cab3057261deee56653164989fe62ec9db4/merged major:0 minor:1280 fsType:overlay blockSize:0} overlay_0-1291:{mountpoint:/var/lib/containers/storage/overlay/9ce1500167ac8750ef86865088d0a8bc28f77290bd5ad85e2c665a88dbe41d9a/merged major:0 minor:1291 fsType:overlay blockSize:0} overlay_0-1316:{mountpoint:/var/lib/containers/storage/overlay/a40254f156f777c4f125a72858e2a0497f9484a7b0c9b4a927442487e91aa00f/merged major:0 minor:1316 fsType:overlay blockSize:0} overlay_0-1318:{mountpoint:/var/lib/containers/storage/overlay/26ed6c95e69147a1d599cfe9990816a8068a3b21a3cc853c911844c3a801e3fb/merged major:0 minor:1318 fsType:overlay blockSize:0} overlay_0-1322:{mountpoint:/var/lib/containers/storage/overlay/a63c4ba8c1beaf650569b34e8afc4dd0afb4473d25ba81964e6a2241a73e5d5f/merged major:0 minor:1322 fsType:overlay blockSize:0} overlay_0-1328:{mountpoint:/var/lib/containers/storage/overlay/6d3a3ce3b75d7cfe590f0702635263e792ea5d77f245045a38968c9d2a78a3a0/merged major:0 minor:1328 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/b1b452a7ba83fd534073e80424169fe31afe1fd76960607a59176690abcdf3e9/merged major:0 minor:133 fsType:overlay blockSize:0} 
overlay_0-1335:{mountpoint:/var/lib/containers/storage/overlay/47378ee72a24bd6a31b45162536d0d29d98e86bd1e4ab5175288ef680d58f191/merged major:0 minor:1335 fsType:overlay blockSize:0} overlay_0-1338:{mountpoint:/var/lib/containers/storage/overlay/6915e786314203e7ce5d3ae5fbd67714b8d0f35de8ecbab88b5fb626e813d637/merged major:0 minor:1338 fsType:overlay blockSize:0} overlay_0-1342:{mountpoint:/var/lib/containers/storage/overlay/557779f14b615ce65ddd50edf44ecb45812a1ecbd69e5652eeda491b319758aa/merged major:0 minor:1342 fsType:overlay blockSize:0} overlay_0-1345:{mountpoint:/var/lib/containers/storage/overlay/b84a4b023a6c29a79be4d2ebe45dbe4868a607b9040d172a0c3d20aa1197a1ac/merged major:0 minor:1345 fsType:overlay blockSize:0} overlay_0-1350:{mountpoint:/var/lib/containers/storage/overlay/c03c0730a659757460f56de6ad7be6edf58efdbc5f0d4f338559ff87da451857/merged major:0 minor:1350 fsType:overlay blockSize:0} overlay_0-1353:{mountpoint:/var/lib/containers/storage/overlay/688dd6b250b49dab34aff42328721523fbc5469f4f70989730d1f6b4b3f283f5/merged major:0 minor:1353 fsType:overlay blockSize:0} overlay_0-1356:{mountpoint:/var/lib/containers/storage/overlay/06820c0983d3df2e053fd2a346a4ce06ca4da14ae2957600447738219fa16594/merged major:0 minor:1356 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/fc0b8a651db054cc08c0734ffcc2a9f0d455f59a5f82d0b6f4c4bc2ec09464bb/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-1361:{mountpoint:/var/lib/containers/storage/overlay/7424f240febe56e7d0260c780d04409a7f9988b2d109001485587fddba466fd9/merged major:0 minor:1361 fsType:overlay blockSize:0} overlay_0-1365:{mountpoint:/var/lib/containers/storage/overlay/d2f4c333417a33d82daeb90a052f77b4f0dd5c76d7acbdc58a5a8fd2d1a99fdd/merged major:0 minor:1365 fsType:overlay blockSize:0} overlay_0-1376:{mountpoint:/var/lib/containers/storage/overlay/25369b023fcb321e903d4acf2b455a132a398f9006b27f4346e26e87d8b0fa58/merged major:0 minor:1376 fsType:overlay blockSize:0} 
overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/8b9123ed0c8c5e692061aa2dd116a0bb4107301711c12468dbd267e1e8370177/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/ea3937862f477b84afb4497a226964794e29146baa015e84512a581e2754eb4f/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/388693ac608b221c22e30cc02916335a296781477a587a43dc4a921e285085cb/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/4d8776f592a3da95bcb765045bf24294ca12fca96bbbc9eea3834ebfd8edf1cb/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/83615b1a79296f07ea844a91f1ec6e1f36bad5a5dd21361ccaadd0abece8f611/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/e5675b379b806105ec1bd682c049511ae49fb88c3fe8dc64871a8d02d2889eae/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/68cb588003993d95c397a477f67ec210b3ea0ae2d8fdee8968778c568bc8e343/merged major:0 minor:165 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/db63326da438136f8706338ebfbb7dc10886f1ce165e76ca27ec6590f26b6848/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/6814bad0fc4640ef79f58713a014817e8c81fed4399f08cc63392bf46c302761/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/1a29f3024cefb905a4db591200213a22a88745b85067efea3d01057d4fbc9338/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-177:{mountpoint:/var/lib/containers/storage/overlay/86c38e88780248cbf2f43b2e05119a2db1ca8386aac391b7e3b9769cd4da498c/merged major:0 minor:177 fsType:overlay blockSize:0} 
overlay_0-181:{mountpoint:/var/lib/containers/storage/overlay/d477ad6fcfce89711b44521e4726ebda8e25f0cacef85f9e4e0ae9a115b22b65/merged major:0 minor:181 fsType:overlay blockSize:0} overlay_0-182:{mountpoint:/var/lib/containers/storage/overlay/7f0dfb186347815369f2545d8bad2df2429a14312728e04a41f6da526e1dbbd1/merged major:0 minor:182 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/c2c5a8b5ca416224456ca4d884b5232a6e42d2c32802685c4a1a8a7299040c4b/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-185:{mountpoint:/var/lib/containers/storage/overlay/9222cfc34724fa85e99985ec030f03cad67f1d59590184131a81e27e8c12b87c/merged major:0 minor:185 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/778d243bb2436b0cf34464ccd846e245d236bd358dcf7d6c30447b2dae9cb4dc/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/86c4a6940606b9991c443606e3479aba93741d603f62260e4425b8acdb82a4d7/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/7e5cb4f0c88dc179fadd626d07bf6ac90cfe4c8de2819f4ef9a088534be44740/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/bbb1a8e076f18882f30eac3228b417a41071dcaf1784b7510b6010ef3e68394e/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-197:{mountpoint:/var/lib/containers/storage/overlay/edb902b63708c2524971f38a31b7d59038aae4d7fc0837f7b8390ccfae7d1666/merged major:0 minor:197 fsType:overlay blockSize:0} overlay_0-205:{mountpoint:/var/lib/containers/storage/overlay/99f218b94c5d4eeb54bc364000b17aec0cf57543be54c3dd9f0339952bb54e0c/merged major:0 minor:205 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/182dbb475f39f3238003607f30407f85e0cf67b81f873b627e644acea7e8dc51/merged major:0 minor:210 fsType:overlay blockSize:0} 
overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/bc858f41ec9d813d80b1b853bdebbb9e7c35c84758c25eb9e3dfefbdd14b8b85/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/7d37bda42d2075af4abf39a9abab42884c491e4d0c80048866a147265f324383/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/b10c9b87ac48b114bf1d7c8b6f6ebb9861c8162b8482e45ab559bf2708c84642/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/00605ea8170362cf6b21ae8c78d579d85217c64b531cf54aceb1fdc00a6f221e/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/136e726352ea6729573ed3b48631ba334e76c1fedfe103f06d67defb1caf2dd3/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/fcd3b8d4e8e55f8d657eebb5ea4e5533e4cff6840b0a26637dcf5def2242e73f/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/c11c3cefc93140c26cc04a46e0e9eacfa3670c3109e9577843ff872b77098701/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/03b8abd6f0c85d872372a92a78a046169fc4d97fa83a6d7076d106092278b0cd/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/506eaf5639e4e8343277178cd973f190e29ae0b9fd1a9f786735dd14f892fa42/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/66618ee9e9a82f6195f5a73b73c64ca596d0f30b46ef21156a622393beedc4ab/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/e74d84294102aab640a5eb2058d95be2024a38ade76430fb42ba1fb66b16ae5e/merged major:0 minor:305 fsType:overlay blockSize:0} 
overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/4d780b3ecac2672da8979576516038c89cf07a2f11c43bfd2dafa351c4d6b64a/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/c5d92dca52645885d67f69993695180d6cee3ccb13816ddfcad508f4200bd52a/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/0616c13dd7fa330b231d55f748821c9b8082756bb1e35cd1dfa1beba0f940c2a/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/b3edf2f0f7444ec792cc5e00172501a59d9d0437167275f2b52b85a88e8a2e01/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/9d56a59c966efa2f16ea35c40d25bc74ecc1e8029d724a43f1e36f7e9f3211de/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/7c1e68f603db191132d2f101694eeb372e36b3334fbf7c2954e8b57c413af061/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/e7522cf9449ebfcf3dfe0838f53eb0c615b0f1a8bd274d797e5f0140aded0cb4/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-326:{mountpoint:/var/lib/containers/storage/overlay/945f950b5ef3736ade93f813ba4a4f24b5c1edb6b50894cac04a61fefbd1e912/merged major:0 minor:326 fsType:overlay blockSize:0} overlay_0-328:{mountpoint:/var/lib/containers/storage/overlay/60ba3b8ee7317020fa0e1bc2f67de0151ed462abbdfce878a8a4714353b66699/merged major:0 minor:328 fsType:overlay blockSize:0} overlay_0-330:{mountpoint:/var/lib/containers/storage/overlay/e117a5bfb6e828d461faaacb81cf362679ceb3cc422f4e55388d62a54764e9c0/merged major:0 minor:330 fsType:overlay blockSize:0} overlay_0-333:{mountpoint:/var/lib/containers/storage/overlay/8cf1f2470fffd8ed39414ba6e2e5e141a65b938fd8cac2261ae4778723f903e2/merged major:0 minor:333 fsType:overlay blockSize:0} 
overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/6fe5244c612f93c6907edf254394901fd3f1a26d1e76d50f1cc7daf9571d0a52/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-343:{mountpoint:/var/lib/containers/storage/overlay/53ff663f6572270bafdb52d9cf9be2411c29fd8e5e3baaa009fa1f36b651e7ba/merged major:0 minor:343 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/e56e702d5d74113bd0f389fd97277c3c8c5549caeb09ae057d637e4df8ad77c1/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/a8f5dd735603b751b0c47bbd220593fd5a3d8923360b742eb05fd65a28bc9585/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-354:{mountpoint:/var/lib/containers/storage/overlay/8567523a8a2a47c0e7ab4ede32762e0cd3e8a042b9e0e87ca1fc8d393e0404f4/merged major:0 minor:354 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/bf1476c41b6e59ba698617b126f9a48a4684131478a4515b3973d4fc13c331b7/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/d4de2cadbb86fbd2ff6f4e82d6f8fd748bc3f84c15f3ac3e88d78b1eb39212bc/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/dcc3bddc1278236e108ccea4fbf93f8274c72597d5388eba6ad6a34c8eb1cd1b/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/877e1c2f55bee5e93f0ff5c7cb5d9dfa4b7faf820454f6144cf6f5572c0d3e9d/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/82a20e5493b61a53f2f84ca5c48e796feefd00b567da20470bd04e5bafddc8b2/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-365:{mountpoint:/var/lib/containers/storage/overlay/5fbd027c39f8e7e0ae542eef79a1416b3105093efbc12ececaf96452317eb97d/merged major:0 minor:365 fsType:overlay blockSize:0} 
overlay_0-370:{mountpoint:/var/lib/containers/storage/overlay/112a13398a3c633ca5d7ff1d753ccc42a73c89569171728219d05f4013ff466c/merged major:0 minor:370 fsType:overlay blockSize:0} overlay_0-373:{mountpoint:/var/lib/containers/storage/overlay/b9ba5ad153dd7d8977d7bd481d487404fe0c10ed20ea6fd5131f1e34abdae210/merged major:0 minor:373 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/743b4924cd50db9daba90225cf01d9cad7b43f7bb3329d97f3bb23f01a3dc259/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-377:{mountpoint:/var/lib/containers/storage/overlay/637500f536ea3b143dee3328073e7cab25d3a6b74318ee8b216ba4f43a88eb78/merged major:0 minor:377 fsType:overlay blockSize:0} overlay_0-379:{mountpoint:/var/lib/containers/storage/overlay/a59e0e4b991134e429df666a660ec6b964657fae9e195972b736352d5386c1f7/merged major:0 minor:379 fsType:overlay blockSize:0} overlay_0-382:{mountpoint:/var/lib/containers/storage/overlay/ed63a50044b2c29baab0689d0f6ece01d477faff6aa23dfe78d07a18b7b00cac/merged major:0 minor:382 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/239ba039874f4e09bc0b80717ae6e8e955c205bd1e4d2cf0ff820f50aacc8780/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-388:{mountpoint:/var/lib/containers/storage/overlay/ec9df29008da8b569f9e92082e572890e1ab39837d47bdbab637c42bc5aaeb82/merged major:0 minor:388 fsType:overlay blockSize:0} overlay_0-390:{mountpoint:/var/lib/containers/storage/overlay/5f5d77d7fade99bc1968368add1dc25e88f377a3d93231f90ffd9be4b5d2bec0/merged major:0 minor:390 fsType:overlay blockSize:0} overlay_0-395:{mountpoint:/var/lib/containers/storage/overlay/4f96d3b3f35dffccb386ae292244a7be2cef40d00af08585ebd483c9074d85db/merged major:0 minor:395 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/b1b858e078926b6600e23d39825a3d36652a734c1121def54d0dfd3c76330bbd/merged major:0 minor:41 fsType:overlay blockSize:0} 
overlay_0-412:{mountpoint:/var/lib/containers/storage/overlay/a645b4465e0b5cedde603f6268ab2b0a98823870513f86e4afe7874c0345eef6/merged major:0 minor:412 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/7df457cb4575a3ad73b88565712080a0386e61acc11170fd9f53aa30daa446ff/merged major:0 minor:414 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/a1a93151c0d42292dd6020361648a88ba15643c82cd83c69fcfc2d74b296a5b2/merged major:0 minor:416 fsType:overlay blockSize:0} overlay_0-418:{mountpoint:/var/lib/containers/storage/overlay/aeeb1f506eecb69485ba915fff19ab38112517444d1b34fd373181081c46a78e/merged major:0 minor:418 fsType:overlay blockSize:0} overlay_0-420:{mountpoint:/var/lib/containers/storage/overlay/dc6e231d242534176942d914b3d73c137f303f036b88599f237977a1a8047850/merged major:0 minor:420 fsType:overlay blockSize:0} overlay_0-422:{mountpoint:/var/lib/containers/storage/overlay/1cc2360fb8101042b114c2c98768e9be913624566dd8736a24ce8799c0518f85/merged major:0 minor:422 fsType:overlay blockSize:0} overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/27e2c3b68d6349f3157cef06614106de465c37b540f0a98aecc51bc8149124ca/merged major:0 minor:424 fsType:overlay blockSize:0} overlay_0-426:{mountpoint:/var/lib/containers/storage/overlay/50eeeacc16f7f8b3ea33c1a01b3cf45b8e9c454f3b06dc4dde0e71b5b174bd91/merged major:0 minor:426 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/d152cc52b802a678342734e1383820ed06b1f8d71fed5a2db8cd8a3444a9e711/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/13631dd808e634d0042da995b84c6f0742d3293f0f2188ba0131ffab4f12f3d0/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-442:{mountpoint:/var/lib/containers/storage/overlay/820f4728759d896c8cc2b7519e5b2934c78009bd9cae9b669c325f2d0639b5a8/merged major:0 minor:442 fsType:overlay blockSize:0} 
overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/1e1656f7b30e137be69458d7eb0c0af6e8b05d9414d71c6dc42c605c1c46726d/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-448:{mountpoint:/var/lib/containers/storage/overlay/988ac409131d5384c73b6e7f65559915bb66a24135c87ec82d2c0386041197e8/merged major:0 minor:448 fsType:overlay blockSize:0} overlay_0-453:{mountpoint:/var/lib/containers/storage/overlay/8edaa5cdffba1494b989f635221da65e073ece6f9466134303c9968e734d32ca/merged major:0 minor:453 fsType:overlay blockSize:0} overlay_0-458:{mountpoint:/var/lib/containers/storage/overlay/48123ec710022dfd04ccb24525cc2b9684a69b945c586e1580df4c2a71d44938/merged major:0 minor:458 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/de8ea2e119c6761b20ef5a685801ec5b4949c10df7cd22de982acf2f709dc8aa/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-473:{mountpoint:/var/lib/containers/storage/overlay/34938e34ea2490a31ea931fd2f81760cbee3fdda201c40b0d8967c910d6ac098/merged major:0 minor:473 fsType:overlay blockSize:0} overlay_0-490:{mountpoint:/var/lib/containers/storage/overlay/64ec6673487dfa696f02dcc495752671b1bbab35740ddbfaece586ed6fd0759e/merged major:0 minor:490 fsType:overlay blockSize:0} overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/9c0df2b695770a9b615275a41a8e8bad28ca0fd0a49764f3c95837a0c04b71a7/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-503:{mountpoint:/var/lib/containers/storage/overlay/e4aecf5a02172a51558de00bebbbd646927af789fcfcc5fba1e4f0a13f33b0e5/merged major:0 minor:503 fsType:overlay blockSize:0} overlay_0-505:{mountpoint:/var/lib/containers/storage/overlay/049a16cbe70fb005548d47a95148b0550973bc7f844808fa65755eacd80696a8/merged major:0 minor:505 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/bc16d57a3358e4cf195bdc7d98f5efbf42b25fcecb89041131c54a0e8deea85f/merged major:0 minor:52 fsType:overlay blockSize:0} 
overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/19a68be62cd50dbb5af382d854f4e9ca11818bce725523ff9b5d3238efe637b1/merged major:0 minor:521 fsType:overlay blockSize:0} overlay_0-523:{mountpoint:/var/lib/containers/storage/overlay/56475fbf1b878ed3d668a468374d222871bdb921ba57142778942c025055e3c9/merged major:0 minor:523 fsType:overlay blockSize:0} overlay_0-524:{mountpoint:/var/lib/containers/storage/overlay/f8294c5b0c63cb8181810aa3390cf4d4e95183943da5e69c8fa1da352dd708f8/merged major:0 minor:524 fsType:overlay blockSize:0} overlay_0-526:{mountpoint:/var/lib/containers/storage/overlay/883a90000ab7433611c36d9c75fb179627ba9ef44cce94695c0df5fe7ee2e03d/merged major:0 minor:526 fsType:overlay blockSize:0} overlay_0-528:{mountpoint:/var/lib/containers/storage/overlay/93d0ffbb23f652156cf0500340c641c2b3a1fa0d3a5cc7937afef7c7f30b29fb/merged major:0 minor:528 fsType:overlay blockSize:0} overlay_0-532:{mountpoint:/var/lib/containers/storage/overlay/483c29c63557673697514e176713788bc839437e203bc2b555122f5285bde326/merged major:0 minor:532 fsType:overlay blockSize:0} overlay_0-537:{mountpoint:/var/lib/containers/storage/overlay/66f7fba222e5924010ec3d456f20da9b6078e5a732f81eab8cc8ac6cc0889b8c/merged major:0 minor:537 fsType:overlay blockSize:0} overlay_0-539:{mountpoint:/var/lib/containers/storage/overlay/672c99fa6b4892df807b3614a6c3041a23fb7a09b84bac737aa654d151576cd9/merged major:0 minor:539 fsType:overlay blockSize:0} overlay_0-541:{mountpoint:/var/lib/containers/storage/overlay/610757cc33185178a70487419978faa7a0907fc324bc8d47ec35e388ba2357c4/merged major:0 minor:541 fsType:overlay blockSize:0} overlay_0-543:{mountpoint:/var/lib/containers/storage/overlay/72006c3c2299a3c9a11bdf6e31a6e53336ada26c32125d0d3c1c5322aa647069/merged major:0 minor:543 fsType:overlay blockSize:0} overlay_0-544:{mountpoint:/var/lib/containers/storage/overlay/5df19aa8f22045511a9f95a7019d6b987ef887122d1056f0e5d55ab34248113d/merged major:0 
minor:544 fsType:overlay blockSize:0} overlay_0-558:{mountpoint:/var/lib/containers/storage/overlay/14ce2cde86a031d4d30f1e532eb2cf285aa4c91c2183e46a030748708dc51324/merged major:0 minor:558 fsType:overlay blockSize:0} overlay_0-560:{mountpoint:/var/lib/containers/storage/overlay/de93874c42af4af5fd040590c932ff697209b7d1da6fde4eddae54771fd11d05/merged major:0 minor:560 fsType:overlay blockSize:0} overlay_0-568:{mountpoint:/var/lib/containers/storage/overlay/50272fa925e0d5b365ccb986f121072cc29520aefbfd6610d7ee3119d9a21689/merged major:0 minor:568 fsType:overlay blockSize:0} overlay_0-576:{mountpoint:/var/lib/containers/storage/overlay/e3a2971727fd119e96392d2016d8fd076595a1dc9c0a643b6037107d2de3d2b2/merged major:0 minor:576 fsType:overlay blockSize:0} overlay_0-583:{mountpoint:/var/lib/containers/storage/overlay/5d33d05a9be9f92bb15bd8e5f56f0c097a86a305418968b62bfcc2aa2557ec67/merged major:0 minor:583 fsType:overlay blockSize:0} overlay_0-585:{mountpoint:/var/lib/containers/storage/overlay/85a7e2cceefe55fb5a34efbece66cc14aaf0b2a8b7f7d4ff46b17f540f0a01f9/merged major:0 minor:585 fsType:overlay blockSize:0} overlay_0-587:{mountpoint:/var/lib/containers/storage/overlay/3705591262db6b41703ea7c45e6f416f45a3f509adc5a0e7a655107c19c92edf/merged major:0 minor:587 fsType:overlay blockSize:0} overlay_0-589:{mountpoint:/var/lib/containers/storage/overlay/f3808faac79462b1ca734a6054dfef17d9cf6286da66ee7ea911ecd6bb36cdde/merged major:0 minor:589 fsType:overlay blockSize:0} overlay_0-601:{mountpoint:/var/lib/containers/storage/overlay/7950e80104c0141783932c33bc1ef3f28bedfe359508d8ce8082bb6f1ef1e780/merged major:0 minor:601 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/cc27404b38d1b35f2772a687980fcbebbb9bb9458ddce0655b6bfb2fc9233d0b/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-61:{mountpoint:/var/lib/containers/storage/overlay/9d4d01607649d467d16a5612e9eeea5147cedd826e442a65ab61d94c8e643105/merged major:0 minor:61 
fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/1f26d9fa0d4d8f636672a2c53ff33da2a5d45002cf4fd12a89aa4bdd6cb507c5/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-614:{mountpoint:/var/lib/containers/storage/overlay/9e5f5f37cd0c517467e454c65ae10e934581f6dbae16a5c742556f29d781753d/merged major:0 minor:614 fsType:overlay blockSize:0} overlay_0-617:{mountpoint:/var/lib/containers/storage/overlay/34f4b5b9ac8a418b64a2087da926b4b7de8d975a2b52c13226b8d5dca128eef0/merged major:0 minor:617 fsType:overlay blockSize:0} overlay_0-618:{mountpoint:/var/lib/containers/storage/overlay/d11ea440cbb1ed6d1f0dcbde2e517cf7d1a515ec32cf34e591d5ff8322e5fc13/merged major:0 minor:618 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/4b05a0568fd434eb80224ed6ba0a3077df60ed9e605c9475bb5b06b2e5b999b1/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-620:{mountpoint:/var/lib/containers/storage/overlay/7b98096edcb1b7838dc281eb09cf14dd2fb522a57b558c6876e387622f097bed/merged major:0 minor:620 fsType:overlay blockSize:0} overlay_0-624:{mountpoint:/var/lib/containers/storage/overlay/5f8caa47d54a97a258d6103b0c14c9c775ed8cd94a9d2032f76441a2894651ba/merged major:0 minor:624 fsType:overlay blockSize:0} overlay_0-631:{mountpoint:/var/lib/containers/storage/overlay/38f84bba1c875d9e984d9aaac0d5c00ad80290cb3e5b2cf151587a74c2cdd0b0/merged major:0 minor:631 fsType:overlay blockSize:0} overlay_0-635:{mountpoint:/var/lib/containers/storage/overlay/5d610db5d49b0d7283ad180eeda5e37c04f18af1117271e782d9dc99bbcd2a5f/merged major:0 minor:635 fsType:overlay blockSize:0} overlay_0-638:{mountpoint:/var/lib/containers/storage/overlay/79595d3611ab74b1ab6fb2b1fad6654ee82bb45463435d36533d1f9431e1cca7/merged major:0 minor:638 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/887c29476898261d12cd652e6380d39161c1064fa54ceb062f9bca88785d4e0b/merged major:0 minor:64 fsType:overlay 
blockSize:0} overlay_0-65:{mountpoint:/var/lib/containers/storage/overlay/aea6f34282c56664cef8abb6705d534f0c6140b7a2a55148f94c5193a86439a4/merged major:0 minor:65 fsType:overlay blockSize:0} overlay_0-651:{mountpoint:/var/lib/containers/storage/overlay/65a8a9674b68052a6d458290d98f0ce639c08aed8437b6cef31193160c9757f5/merged major:0 minor:651 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/f30e4a5fc8cd5bd13f5bee084e26d9e4ee2c7c412d4acf7191a4573f060f2f25/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-653:{mountpoint:/var/lib/containers/storage/overlay/3a4d2682eff4f7f559c293a28927d29a03eb056c16e783bde6b3775772cbe37f/merged major:0 minor:653 fsType:overlay blockSize:0} overlay_0-659:{mountpoint:/var/lib/containers/storage/overlay/7602948dea1f60e7591d33cf0c196901f327f7a4bef05967c98219cc6e67ec83/merged major:0 minor:659 fsType:overlay blockSize:0} overlay_0-662:{mountpoint:/var/lib/containers/storage/overlay/47cc63c170a513823c88d1d2f952c4a322ea893f05ecedaf1aa9111c31d92aee/merged major:0 minor:662 fsType:overlay blockSize:0} overlay_0-667:{mountpoint:/var/lib/containers/storage/overlay/4614f3b73012ddcb653072a86d96ef7a2edeb9871e00188e717e84cf26aaf1af/merged major:0 minor:667 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/d2d5d5a3b568dd3a7f423fda03b0d5d219bab35ecd6c29a4b0e1734b6a8e9900/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-682:{mountpoint:/var/lib/containers/storage/overlay/ca2a30172327c768008d3a09598886a42f2c29420c5f1c2ab0e9fcf82e7db7f8/merged major:0 minor:682 fsType:overlay blockSize:0} overlay_0-684:{mountpoint:/var/lib/containers/storage/overlay/2efe8fc6a92a121fa0848ddcfb48cf6212e9d9519c89634ec4bf66d25051f2c5/merged major:0 minor:684 fsType:overlay blockSize:0} overlay_0-687:{mountpoint:/var/lib/containers/storage/overlay/259fbbd2f45ad8cd31a38ce770f771cc26fb00bd7b7c852dc0443570412fcc17/merged major:0 minor:687 fsType:overlay blockSize:0} 
overlay_0-689:{mountpoint:/var/lib/containers/storage/overlay/5b21c9f2c68da5a4ad119c152ed8aca1ad7e39b01a12ab2d57322d8e0418f64e/merged major:0 minor:689 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/cb06c9de66c14a6267dc2003f0a9e5d7ce49e6d34edca493575fc5c9ab9ec0df/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-694:{mountpoint:/var/lib/containers/storage/overlay/45ebda6be21feb78d3103eab71a2079db4704ca1feb8c078f5d76bb5e16ee190/merged major:0 minor:694 fsType:overlay blockSize:0} overlay_0-696:{mountpoint:/var/lib/containers/storage/overlay/a2884137215e9696b60a53ae43088f5c0cb7b6295630d454e6ff5a13625a2d40/merged major:0 minor:696 fsType:overlay blockSize:0} overlay_0-701:{mountpoint:/var/lib/containers/storage/overlay/904a1f045a2791bf9118a71d57e8ad1a7a6110a0f91048f489f402b0e034e416/merged major:0 minor:701 fsType:overlay blockSize:0} overlay_0-703:{mountpoint:/var/lib/containers/storage/overlay/01f75d4b1e633fb9caacedd159b21863ac3bc0866ec92835f90b7a6fb6a54170/merged major:0 minor:703 fsType:overlay blockSize:0} overlay_0-705:{mountpoint:/var/lib/containers/storage/overlay/5d48394cba2a5d1d99e042e03e75759d3dfe550c95f5f1ac585772d8ae493bd7/merged major:0 minor:705 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/24bd5b420f7de69d9df7cdad02e4593ee830058f5a482fd9fccd19b5978c3653/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/c4a829d6db61c285b62b4a8828eca7598c6b1e8c142ea6b24c6638f3ac285d41/merged major:0 minor:710 fsType:overlay blockSize:0} overlay_0-712:{mountpoint:/var/lib/containers/storage/overlay/9d8dad70e4444ae1b1a53c1d5236c7eb3720f0fee4be647f8bc6658b5d805641/merged major:0 minor:712 fsType:overlay blockSize:0} overlay_0-716:{mountpoint:/var/lib/containers/storage/overlay/1dfb3cd4328c44b637373432a977c0507f52aacf455ef4f97699bad046a5d7ca/merged major:0 minor:716 fsType:overlay blockSize:0} 
overlay_0-729:{mountpoint:/var/lib/containers/storage/overlay/25b9b7fe488641bae329227f5c0a7cb106d4bdaa06dcd9a1c6c962b51dbdbdf2/merged major:0 minor:729 fsType:overlay blockSize:0} overlay_0-735:{mountpoint:/var/lib/containers/storage/overlay/cea50dfdde9b832e7bd27b1542afd16bb0d1ccc65ddf3b69e5d0e7dda3a62218/merged major:0 minor:735 fsType:overlay blockSize:0} overlay_0-737:{mountpoint:/var/lib/containers/storage/overlay/0f7eaf7b78dcb0ecd7246eef84701a8fc161b9899a5bbd129645c010732e0c0d/merged major:0 minor:737 fsType:overlay blockSize:0} overlay_0-739:{mountpoint:/var/lib/containers/storage/overlay/939195d48b6c24a47923a3d759386a26b6bd15c372b1ba5367b8adc5346ff1b0/merged major:0 minor:739 fsType:overlay blockSize:0} overlay_0-749:{mountpoint:/var/lib/containers/storage/overlay/21ea02f26da0f77e7e7aaf02d22a46945734b479c128a6657f5950681ca2fd72/merged major:0 minor:749 fsType:overlay blockSize:0} overlay_0-775:{mountpoint:/var/lib/containers/storage/overlay/802f1e0fb0c302e1de6b94a5dc7e075d5645453f09a73319e03f3fe56881be42/merged major:0 minor:775 fsType:overlay blockSize:0} overlay_0-783:{mountpoint:/var/lib/containers/storage/overlay/1ac2c5c6be85b605f565f8c4319e09097b9efba12ae963ebbcf0722f3585d4fa/merged major:0 minor:783 fsType:overlay blockSize:0} overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/d2dd4e9ec1258be3c0e539a39d3f90ef28311fc75eb76a85e97459fccca535d6/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-790:{mountpoint:/var/lib/containers/storage/overlay/b0c2013651d28dec22ef5efcfe8083b6e9d61ad78ee5f70d1f22b887414d100a/merged major:0 minor:790 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/aa0e24a5c646fba5a80c6d229a2e7c05cfd4c6e67fcbcd8a83e1403ff561973f/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-801:{mountpoint:/var/lib/containers/storage/overlay/a730025461364c994d148c26931b6ee3ac996999bf3fa9534d6715b5ac4aaadc/merged major:0 minor:801 fsType:overlay blockSize:0} 
overlay_0-803:{mountpoint:/var/lib/containers/storage/overlay/b40061aa5c8f65aa43b26037dff8c181afbdac752660720b10f12d1f59ebc6c8/merged major:0 minor:803 fsType:overlay blockSize:0} overlay_0-812:{mountpoint:/var/lib/containers/storage/overlay/6cc31a0f528d9a583fb26240fd80d0267ab4e0ff7c5f0e41059d6d2b7d6c3d7d/merged major:0 minor:812 fsType:overlay blockSize:0} overlay_0-831:{mountpoint:/var/lib/containers/storage/overlay/72cd5b2fe0b79b424e38ac8f5d2ad0f3f47e641a4f41e339a6f0fea985cd8858/merged major:0 minor:831 fsType:overlay blockSize:0} overlay_0-832:{mountpoint:/var/lib/containers/storage/overlay/4a788958b84a989ca696a20c16c3afe6ab0a17affc96e6c113182da69d22b78b/merged major:0 minor:832 fsType:overlay blockSize:0} overlay_0-837:{mountpoint:/var/lib/containers/storage/overlay/8874a503494644586fb6a6aae556491600d526faad743b6dbf26361872f6d48b/merged major:0 minor:837 fsType:overlay blockSize:0} overlay_0-839:{mountpoint:/var/lib/containers/storage/overlay/571f33faec1729504a49cf027f68317f6e40a6d8a0cd785cdf9b0ab4996f350b/merged major:0 minor:839 fsType:overlay blockSize:0} overlay_0-841:{mountpoint:/var/lib/containers/storage/overlay/3f103c347f2e294d6b6426933042ca53cc238d77a64558e895eefdcd8a9c306c/merged major:0 minor:841 fsType:overlay blockSize:0} overlay_0-847:{mountpoint:/var/lib/containers/storage/overlay/84d361753ebd2024a5cf1bebc83598ce58b77db10f6d9ac0673106f01e9785b8/merged major:0 minor:847 fsType:overlay blockSize:0} overlay_0-849:{mountpoint:/var/lib/containers/storage/overlay/55b4efb0936711d57c090e69a7fdbdcc787cdc7fb5101555f9ed1b579bfba07a/merged major:0 minor:849 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/9f0117e7311739ae65b68c0f2d4993b94675852de28ac94fc98a87c585823eec/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/d4498dce4d68e6634f584a152b8d85b222cae5f9a8d76ac91359264bacf65f73/merged major:0 minor:852 fsType:overlay blockSize:0} 
overlay_0-854:{mountpoint:/var/lib/containers/storage/overlay/9f0e00dee584ae43cfefa53a172fd83097ef6e3a0880d064321df9b73381ab70/merged major:0 minor:854 fsType:overlay blockSize:0} overlay_0-856:{mountpoint:/var/lib/containers/storage/overlay/d119a61ba565e053c666518a367e51b7edc54ca6a4a3c84e9c94fee2d1521eda/merged major:0 minor:856 fsType:overlay blockSize:0} overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/0cf820097773db32c6cce946c5a00dd8829244dbd26397820fdbcf8a3b416a17/merged major:0 minor:858 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/a85ce8e1a4b31f0cd222ab7945d32b0ea69b902b9b3a1076dabc9ddeaf956d12/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/72a72c26c51c7c3d1b481a49eef2ce1289c759e23085d1c753b12c32b7215954/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-861:{mountpoint:/var/lib/containers/storage/overlay/23cc48de3324837ea30dfbdf6fe6eb8fe57dd41d6247b71794ab525661ffa925/merged major:0 minor:861 fsType:overlay blockSize:0} overlay_0-865:{mountpoint:/var/lib/containers/storage/overlay/dc67ebfafad791f0f6c0b70d5e06999ae14c954aa87beb5d39824d80a3608e6f/merged major:0 minor:865 fsType:overlay blockSize:0} overlay_0-868:{mountpoint:/var/lib/containers/storage/overlay/ef285a3ae80a2ec9bca98d296c6ab1b6b4e3896b247fb6cce65c280eaf804770/merged major:0 minor:868 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/44f4b6f0f55b3f1949c474ff3b46f3f1bfd81120a268bb7b511bcc4992b6c67f/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-879:{mountpoint:/var/lib/containers/storage/overlay/34f30774af43c8be4822f6c83e7e28e26bef4617f61e92d0ec804295a14f77e6/merged major:0 minor:879 fsType:overlay blockSize:0} overlay_0-881:{mountpoint:/var/lib/containers/storage/overlay/6225a624809f0e5935897f6f78f74dd16ad485ed677613dba47e56a61d2ca695/merged major:0 minor:881 fsType:overlay blockSize:0} 
overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/63d120fd99c0b3c5edf2ef37622317733fa144aca1ff1a54cfe67448c54b2447/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-886:{mountpoint:/var/lib/containers/storage/overlay/11c87f97a5e16214a0150b793eab5d12e4f56feecbbed2343ef77644351778b8/merged major:0 minor:886 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/632306fa14390bf0ae600552baf4d7b24fd2b231e1d819793232b78cfcd9de97/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-890:{mountpoint:/var/lib/containers/storage/overlay/f8ebf4ddb658183c5f2f762ff1b3bf8539d6bf5ed16a20526b94bae858ea7b99/merged major:0 minor:890 fsType:overlay blockSize:0} overlay_0-893:{mountpoint:/var/lib/containers/storage/overlay/897ec733e3d96005cb8e3ab19e62ff736539ba618495d1d3ffe8541a300ec42e/merged major:0 minor:893 fsType:overlay blockSize:0} overlay_0-903:{mountpoint:/var/lib/containers/storage/overlay/7eb83ba1780f655367e7fd818dd2d00df54a38ff265577e852a5124ec1921df6/merged major:0 minor:903 fsType:overlay blockSize:0} overlay_0-905:{mountpoint:/var/lib/containers/storage/overlay/8c24dd3b1ff53bc7cc6e2e1e24c44687204e7fc631696456868bbba7197501b8/merged major:0 minor:905 fsType:overlay blockSize:0} overlay_0-907:{mountpoint:/var/lib/containers/storage/overlay/b271420d00c979b52645eee01d87c0b23b5627e168044309412e567649f492be/merged major:0 minor:907 fsType:overlay blockSize:0} overlay_0-909:{mountpoint:/var/lib/containers/storage/overlay/c669fc9d10c239327e558e09920bd8f9d2108f0ba6c6500af166ed256ebe2941/merged major:0 minor:909 fsType:overlay blockSize:0} overlay_0-911:{mountpoint:/var/lib/containers/storage/overlay/0204cd86728dd32e88f78f4f13fd4da59d2db8d708915b4bbeebbc68ebd461cb/merged major:0 minor:911 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/f9835831c18a5752cd7078069af5a6105047281ca97a3b76297c197963ec244e/merged major:0 minor:94 fsType:overlay blockSize:0} 
overlay_0-944:{mountpoint:/var/lib/containers/storage/overlay/d3c9380c55765b0de668d960ec3a9c4b2c3f86e6677c5c119c45e2a879d575d9/merged major:0 minor:944 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/a0b1bb6c66bfb17e133eb155eb46ffade52d64037747b8cb051a83c994175180/merged major:0 minor:949 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/c1308f4f8a667d5485e7a578f2490f0ce2351c2682c1f497e6d399de76bef5de/merged major:0 minor:951 fsType:overlay blockSize:0} overlay_0-952:{mountpoint:/var/lib/containers/storage/overlay/dede9e35ba65df59e386210d88c199202c46e6b80079dea1abe648404e31569c/merged major:0 minor:952 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/f0339016b3bb3a91df7a18739c1e7a127111e5838e3053ddc864a008ffa8fac4/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/2aadc6f140c9f055e40032656670e6ba7bfe306024f5b635fb168b43f2335e96/merged major:0 minor:96 fsType:overlay blockSize:0} overlay_0-960:{mountpoint:/var/lib/containers/storage/overlay/d14092c812a794dd3a1fc77154178fbd7ae1ed39f643bc7a94ef490937d0a4f9/merged major:0 minor:960 fsType:overlay blockSize:0} overlay_0-967:{mountpoint:/var/lib/containers/storage/overlay/471b302ee540f9b4eaea49c0823c90c2a963b9b6126644b5fa3f0fe717d3d75b/merged major:0 minor:967 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/b315663bc11fda417a9cb664da4ad85c0951df002b381b91299c51262279ab75/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-970:{mountpoint:/var/lib/containers/storage/overlay/4c7bfbd81abf52d1a9d341da218ae788bf2cba25c4bf4f38d05b9d3df5a43e5a/merged major:0 minor:970 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/2fb8e7daf07c2b19f5280714ccf215f5af2fad864887e91e5c96b256d3e7834c/merged major:0 minor:971 fsType:overlay blockSize:0} 
overlay_0-973:{mountpoint:/var/lib/containers/storage/overlay/cc9905c59d1ef6407ddf9ff7496c10b809356520128364c1dda183636af93ac2/merged major:0 minor:973 fsType:overlay blockSize:0} overlay_0-975:{mountpoint:/var/lib/containers/storage/overlay/51c222538aebe86c62c168d7e799afe581a2b5371e5d60d10bc28b2225036d0e/merged major:0 minor:975 fsType:overlay blockSize:0} overlay_0-983:{mountpoint:/var/lib/containers/storage/overlay/a74139086c8288e899c886e1a483dc2438663470c8e383b725a8c86963d3c218/merged major:0 minor:983 fsType:overlay blockSize:0} overlay_0-985:{mountpoint:/var/lib/containers/storage/overlay/bda0fe35f16385888bd10a82221be076f4d35dbf464c05aacbe12d7e81682e7c/merged major:0 minor:985 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/966140665a459bafce1e0a060da17fc6812bac03abd6b2027bf4e5b0705a2c44/merged major:0 minor:99 fsType:overlay blockSize:0} overlay_0-993:{mountpoint:/var/lib/containers/storage/overlay/9e3ddd94cd27bc9de24c454a21455832e7839115ca1908fda11fd6d6b125952a/merged major:0 minor:993 fsType:overlay blockSize:0} overlay_0-996:{mountpoint:/var/lib/containers/storage/overlay/c873145e3ca13e11c8781f8d510c260f9d3e4d9291adc7799c1bcb23b68de54f/merged major:0 minor:996 fsType:overlay blockSize:0}] Feb 24 05:37:20.496057 master-0 kubenswrapper[34361]: I0224 05:37:20.494533 34361 manager.go:217] Machine: {Timestamp:2026-02-24 05:37:20.493508005 +0000 UTC m=+0.196125071 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2799998 MemoryCapacity:50514145280 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:8094cc4b75b94a6193669cda4f2ebd55 SystemUUID:8094cc4b-75b9-4a61-9366-9cda4f2ebd55 BootID:a3e360dd-b72b-40f0-a056-0eff64b26b55 Filesystems:[{Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/5c76314bfc127c2893886d4278db6947daa2fbb82909a575cdadd2f5a3b4b008/userdata/shm DeviceMajor:0 DeviceMinor:1088 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-576 DeviceMajor:0 DeviceMinor:576 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5dd4d0e15147dd2dcd433c46cdfb1a10fbbcd3b91480c55088fbf67973e54f4c/userdata/shm DeviceMajor:0 DeviceMinor:606 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-705 DeviceMajor:0 DeviceMinor:705 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d51ce58-55f6-45d5-9d5d-7b31ae42380a/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:727 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1175 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~projected/kube-api-access-bs794 DeviceMajor:0 DeviceMinor:141 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~projected/kube-api-access-fgf94 DeviceMajor:0 DeviceMinor:252 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1253 DeviceMajor:0 DeviceMinor:1253 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1316 DeviceMajor:0 DeviceMinor:1316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b426cb33-1624-45e6-b8c5-4e8d251f6339/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:751 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/39c4d0aa-c372-4d02-9302-337e68b56784/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:793 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1027 DeviceMajor:0 DeviceMinor:1027 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1090 DeviceMajor:0 DeviceMinor:1090 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/798dcf46-8377-46b8-8387-5261d9bbefa1/volumes/kubernetes.io~projected/kube-api-access-jl24z DeviceMajor:0 DeviceMinor:498 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-716 DeviceMajor:0 DeviceMinor:716 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-631 DeviceMajor:0 DeviceMinor:631 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-735 DeviceMajor:0 DeviceMinor:735 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1038 DeviceMajor:0 DeviceMinor:1038 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84b8e720c1d11da23dcffc231251263a604179069ed4f2a829aaaefed039c537/userdata/shm DeviceMajor:0 DeviceMinor:108 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-177 DeviceMajor:0 DeviceMinor:177 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1078 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1218 DeviceMajor:0 
DeviceMinor:1218 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:398 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b0278ee2f5e88257e8f5b58fed5df5f9b9d95fcd14996f65f2dd1c054e4ac57/userdata/shm DeviceMajor:0 DeviceMinor:402 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4/volumes/kubernetes.io~projected/kube-api-access-7vjzn DeviceMajor:0 DeviceMinor:772 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/a3561f49-0808-4d96-95ec-456fcb5c5bb4/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:931 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cd174549be5b88f39588bafbc22af8049014b8bbed26dfd817fa5184b48774e3/userdata/shm DeviceMajor:0 DeviceMinor:403 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1196 DeviceMajor:0 DeviceMinor:1196 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1240 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b/volumes/kubernetes.io~projected/kube-api-access-6b7f4 DeviceMajor:0 DeviceMinor:272 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-442 DeviceMajor:0 DeviceMinor:442 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-558 DeviceMajor:0 DeviceMinor:558 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-1328 DeviceMajor:0 DeviceMinor:1328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:515 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64d82ee2903a4034f2cd6f4a7fd22197c2cda9f27e9a4810423ee5ca5bc5cc6d/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0e75a15a8297368a6c95abe6074b8d1fd12c66b5f2515773157daf62c40e79a8/userdata/shm DeviceMajor:0 DeviceMinor:511 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-841 DeviceMajor:0 DeviceMinor:841 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/telemeter-client-tls DeviceMajor:0 DeviceMinor:1312 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-973 DeviceMajor:0 DeviceMinor:973 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-330 DeviceMajor:0 DeviceMinor:330 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:400 Capacity:49335545856 Type:vfs Inodes:6166277 
HasInodes:true} {Device:/var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:598 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-587 DeviceMajor:0 DeviceMinor:587 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-893 DeviceMajor:0 DeviceMinor:893 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1079 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-388 DeviceMajor:0 DeviceMinor:388 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-543 DeviceMajor:0 DeviceMinor:543 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:599 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1291 DeviceMajor:0 DeviceMinor:1291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-328 DeviceMajor:0 DeviceMinor:328 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:246 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31db0370c08dc41ae971998fe86ac9cb0b2bcc6c08ec28eb749ac1396b3c2667/userdata/shm DeviceMajor:0 DeviceMinor:282 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~projected/kube-api-access-77lsr DeviceMajor:0 DeviceMinor:1157 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-537 DeviceMajor:0 DeviceMinor:537 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-583 DeviceMajor:0 DeviceMinor:583 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-775 DeviceMajor:0 DeviceMinor:775 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3363f001-1cfa-41f5-b245-30cc99dd09cb/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:499 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-453 DeviceMajor:0 DeviceMinor:453 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-528 DeviceMajor:0 DeviceMinor:528 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/081425b6bb126676c8a3b61b952db3a17ca28803f3b46af593db55de6dd0db70/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/03e4cebe-f3df-423f-be2b-7fb22bd58341/volumes/kubernetes.io~projected/kube-api-access-f9pp4 DeviceMajor:0 DeviceMinor:389 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~projected/kube-api-access-xj8cq DeviceMajor:0 DeviceMinor:245 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1099 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1178 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1190 DeviceMajor:0 DeviceMinor:1190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1214 DeviceMajor:0 DeviceMinor:1214 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-197 DeviceMajor:0 DeviceMinor:197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6042346e04d14789f9df563facc73503846c93f9a58755284a883ae67d6dfa74/userdata/shm DeviceMajor:0 DeviceMinor:1185 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/32f719b1fae3e7d132b769e21e46c31c5ab4d99d85c92e0fd1953cfcbf40dc0a/userdata/shm DeviceMajor:0 DeviceMinor:530 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/8f3825c1-975c-40b5-a6ad-0f200968b3cd/volumes/kubernetes.io~projected/kube-api-access-l8z6s DeviceMajor:0 DeviceMinor:932 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:264 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b46907eb-36d6-4410-b7d8-8012b254c861/volumes/kubernetes.io~projected/kube-api-access-k8dtv DeviceMajor:0 DeviceMinor:767 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-861 
DeviceMajor:0 DeviceMinor:861 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e6f428788cdb3f513e95cc63ecf43bbf7b7de35faa154cc080dbc5634ce8151/userdata/shm DeviceMajor:0 DeviceMinor:186 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-653 DeviceMajor:0 DeviceMinor:653 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/kube-api-access-5q2r9 DeviceMajor:0 DeviceMinor:267 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:336 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3cd3830-62b5-49d1-917e-bd993d685c65/volumes/kubernetes.io~projected/kube-api-access-957g9 DeviceMajor:0 DeviceMinor:393 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1064 DeviceMajor:0 DeviceMinor:1064 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:401 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:508 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1003 DeviceMajor:0 DeviceMinor:1003 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1263 DeviceMajor:0 DeviceMinor:1263 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-585 DeviceMajor:0 DeviceMinor:585 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-635 DeviceMajor:0 DeviceMinor:635 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0e05783d-6bd1-4c71-87d9-1eb3edd827b3/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:675 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:722 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a51d75323a923af00f3bd0e9f47fc2b98d3fa4f81d500b08ed1b5763acd5b079/userdata/shm DeviceMajor:0 DeviceMinor:808 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3656437a9ce9676295b2eb9bd8bc3fb63776e655e923084238b22192495f791/userdata/shm DeviceMajor:0 DeviceMinor:829 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:926 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fd03b91adf31c70f04d420a5ba045d6cd9e1f68b14c47322c66de7814d71ccf4/userdata/shm DeviceMajor:0 DeviceMinor:404 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-684 DeviceMajor:0 DeviceMinor:684 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:235 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:248 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/19df6454a08add523c5ff47203d9500ee4d5041717ffe824b8f6b33008f7fb0d/userdata/shm DeviceMajor:0 DeviceMinor:795 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1164 DeviceMajor:0 DeviceMinor:1164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~projected/kube-api-access-hgl5l DeviceMajor:0 DeviceMinor:1181 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/116e6b47-d435-49ca-abb5-088788daf16a/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:752 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~projected/kube-api-access-ddfqw DeviceMajor:0 DeviceMinor:768 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ebd137aadd86a90697f1884cb52d1970bb5138e39026928308cfa18816924e6/userdata/shm DeviceMajor:0 DeviceMinor:1242 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/39c4d0aa-c372-4d02-9302-337e68b56784/volumes/kubernetes.io~projected/kube-api-access-b2fkp DeviceMajor:0 DeviceMinor:799 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-905 DeviceMajor:0 DeviceMinor:905 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8edfb6097f947373026f0b09e341e33fda8a35b32db2f2f2929d0f92ff74f282/userdata/shm DeviceMajor:0 DeviceMinor:822 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-801 DeviceMajor:0 DeviceMinor:801 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1345 DeviceMajor:0 DeviceMinor:1345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59333a14-5bdc-4590-a3da-af7300f086da/volumes/kubernetes.io~projected/kube-api-access-wwc5b DeviceMajor:0 DeviceMinor:259 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-803 DeviceMajor:0 DeviceMinor:803 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-614 DeviceMajor:0 DeviceMinor:614 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-182 DeviceMajor:0 DeviceMinor:182 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-812 DeviceMajor:0 DeviceMinor:812 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-854 DeviceMajor:0 DeviceMinor:854 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9e66323acb79027dbee260b2bd6ea317379967ab104a220c1093c958a45ebc27/userdata/shm DeviceMajor:0 DeviceMinor:1083 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da13c43822ff6ebef72ea5dada557656eab3613ad082a77190dd348e4d4caec1/userdata/shm DeviceMajor:0 DeviceMinor:384 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:397 Capacity:49335545856 
Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-354 DeviceMajor:0 DeviceMinor:354 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1081 DeviceMajor:0 DeviceMinor:1081 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1280 DeviceMajor:0 DeviceMinor:1280 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-960 DeviceMajor:0 DeviceMinor:960 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/32fd577d-8966-4ab1-95cf-357291084156/volumes/kubernetes.io~projected/kube-api-access-fh2pc DeviceMajor:0 DeviceMinor:741 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-886 DeviceMajor:0 DeviceMinor:886 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~projected/kube-api-access-cczbm DeviceMajor:0 DeviceMinor:761 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-617 DeviceMajor:0 DeviceMinor:617 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2e08dd98145938b80638e25896f965db6111532d375ded80b0d82dda78b2522d/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f5885425638056ce98b14e0964ddb8ab6fa82dc0c949c580e04a0b062a448107/userdata/shm DeviceMajor:0 DeviceMinor:747 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/47463debfe8a4cd4bfc5f6610d0dc3da5ba2eb733f6d27a5379ed121dc26350d/userdata/shm DeviceMajor:0 DeviceMinor:1158 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1376 DeviceMajor:0 DeviceMinor:1376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/f05f4c8572660fb60933e1a43cdf2d946cf6624f2ede2a6f783e25d928dd09bd/userdata/shm DeviceMajor:0 DeviceMinor:935 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-975 DeviceMajor:0 DeviceMinor:975 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b5eb5695ccec6b92144f40353b32b80192cdcb4ed71afa4329c2fd87d4604e30/userdata/shm DeviceMajor:0 DeviceMinor:938 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2/volumes/kubernetes.io~projected/kube-api-access-4bf6w DeviceMajor:0 DeviceMinor:1059 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/937f03ad2559d182c0cdd1d2762487960e12dca202f4d10b53ec97e755cb0a40/userdata/shm DeviceMajor:0 DeviceMinor:805 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/005aea3f18d4d280e39bcec0aace6a6b0719831dd54d5e5f2bb06b03a10a1e55/userdata/shm DeviceMajor:0 DeviceMinor:933 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1192 DeviceMajor:0 DeviceMinor:1192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:241 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:269 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-944 DeviceMajor:0 
DeviceMinor:944 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-426 DeviceMajor:0 DeviceMinor:426 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-856 DeviceMajor:0 DeviceMinor:856 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1030 DeviceMajor:0 DeviceMinor:1030 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~projected/kube-api-access-46fll DeviceMajor:0 DeviceMinor:1315 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/federate-client-tls DeviceMajor:0 DeviceMinor:1311 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:690 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-326 DeviceMajor:0 DeviceMinor:326 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-412 DeviceMajor:0 DeviceMinor:412 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-701 DeviceMajor:0 DeviceMinor:701 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1036 DeviceMajor:0 DeviceMinor:1036 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1207 DeviceMajor:0 DeviceMinor:1207 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1304 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-120 DeviceMajor:0 DeviceMinor:120 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~projected/kube-api-access-kc42f DeviceMajor:0 DeviceMinor:1241 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-377 DeviceMajor:0 DeviceMinor:377 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400/volumes/kubernetes.io~projected/kube-api-access-nb75b DeviceMajor:0 DeviceMinor:759 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9/volumes/kubernetes.io~projected/kube-api-access-h5djr DeviceMajor:0 DeviceMinor:286 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-524 DeviceMajor:0 DeviceMinor:524 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/371c4924a11b805a233cd8aa1cdf64502325cac941f4d66f86f54a68683a9e74/userdata/shm DeviceMajor:0 DeviceMinor:602 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-879 DeviceMajor:0 DeviceMinor:879 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79723ddb5fac1ee4009ac879b87cc7a72172f4afc11c2c1be74ae202b150e818/userdata/shm DeviceMajor:0 DeviceMinor:934 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-544 DeviceMajor:0 DeviceMinor:544 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1271 DeviceMajor:0 DeviceMinor:1271 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a2b7a210dee36e67d332da03e90107812f166b01198822dfb676fc0a9a05fc25/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/54e1df610bab1f2d6afe25113c517fd17a97b3a82ba411dc4888d98b1a65da1d/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-568 DeviceMajor:0 DeviceMinor:568 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7a2c651d-ea1a-41f2-9745-04adc8d88904/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:240 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fd87d63ea110a273569e5b66501c57bfaf932272be25e92340e227a60cef6dea/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f938daff-1d36-4348-a689-3d1607058296/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:444 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/secret-telemeter-client DeviceMajor:0 DeviceMinor:1313 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-365 DeviceMajor:0 DeviceMinor:365 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~projected/kube-api-access-fzp4b DeviceMajor:0 DeviceMinor:510 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~projected/kube-api-access-lm88x DeviceMajor:0 DeviceMinor:1180 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1126 DeviceMajor:0 DeviceMinor:1126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3363f001-1cfa-41f5-b245-30cc99dd09cb/volumes/kubernetes.io~projected/kube-api-access-589rv DeviceMajor:0 DeviceMinor:494 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-847 DeviceMajor:0 DeviceMinor:847 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1130 DeviceMajor:0 DeviceMinor:1130 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/4097b46c5415e7a8b1651e87123bd125c21ee99b1c3af149041760e25e6378ee/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/93dd263e4986822eec0c710075ac8eebc645d482f87f7ef8bb335adc841614f2/userdata/shm DeviceMajor:0 DeviceMinor:806 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1266 DeviceMajor:0 DeviceMinor:1266 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1318 DeviceMajor:0 DeviceMinor:1318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1162 DeviceMajor:0 DeviceMinor:1162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1235 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~projected/kube-api-access-4lt5r DeviceMajor:0 DeviceMinor:692 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:728 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-839 DeviceMajor:0 DeviceMinor:839 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-911 DeviceMajor:0 DeviceMinor:911 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-448 DeviceMajor:0 DeviceMinor:448 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-952 DeviceMajor:0 DeviceMinor:952 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-712 DeviceMajor:0 DeviceMinor:712 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e6a0fc47-b446-4902-9f8a-04870cbafcab/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:733 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cd674e58-b749-46fb-8a28-66012fd8b401/volumes/kubernetes.io~projected/kube-api-access-67qg5 DeviceMajor:0 DeviceMinor:925 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1365 DeviceMajor:0 DeviceMinor:1365 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:700 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:715 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:691 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b5410db202b2d2565e3f21ef6f188dc18cdaa71ef843bfa19039eca0376e0d6a/userdata/shm DeviceMajor:0 DeviceMinor:513 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:455 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:657 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-418 DeviceMajor:0 DeviceMinor:418 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-865 DeviceMajor:0 DeviceMinor:865 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e0d20c57fe745f0a7a074b91ba4c54bbdd4dc326b155cd4b8a578d9c21d5db21/userdata/shm DeviceMajor:0 DeviceMinor:797 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/68f61c7a09ca20650d4a6ea4b0f5e362ed36ea985ba0db19d10925a21520b6ad/userdata/shm DeviceMajor:0 DeviceMinor:817 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-181 DeviceMajor:0 DeviceMinor:181 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5/volumes/kubernetes.io~projected/kube-api-access-5dwz2 DeviceMajor:0 DeviceMinor:796 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1239 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-694 DeviceMajor:0 DeviceMinor:694 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-1209 DeviceMajor:0 DeviceMinor:1209 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/53aff8ce601eb36b54bc43ffb3ad6e1b16683e9a02c222af744cc38c77ef8aa0/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-382 DeviceMajor:0 DeviceMinor:382 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:142 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-832 DeviceMajor:0 DeviceMinor:832 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/267ebddc959ac57c572038da835a770f0388428b8136a92cef38a57e55a51aac/userdata/shm DeviceMajor:0 DeviceMinor:603 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/a3561f49-0808-4d96-95ec-456fcb5c5bb4/volumes/kubernetes.io~projected/kube-api-access-r5tgk DeviceMajor:0 DeviceMinor:930 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-1007 DeviceMajor:0 DeviceMinor:1007 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/922eed7d19f9dd738cf0b3fc3e3b004e0316f8e1783948356d4d447355655a65/userdata/shm DeviceMajor:0 DeviceMinor:1320 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1322 DeviceMajor:0 DeviceMinor:1322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:731 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1194 DeviceMajor:0 DeviceMinor:1194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-868 DeviceMajor:0 DeviceMinor:868 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c932287e23f5b8d24efa88b511b35c92261a32985b4d2a556c22eb4a08ba11cb/userdata/shm DeviceMajor:0 DeviceMinor:1087 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/1533c5fa-0387-40bd-a959-e714b65cdacc/volumes/kubernetes.io~projected/kube-api-access-jspzm DeviceMajor:0 DeviceMinor:1082 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/c125f0138a2358ed33a087eaebb28b417878c3d57e675823d35e0431d5663d9e/userdata/shm DeviceMajor:0 DeviceMinor:605 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:566 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1338 DeviceMajor:0 DeviceMinor:1338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~projected/kube-api-access-mdpfz DeviceMajor:0 DeviceMinor:251 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aa70a59110835e6aad43cf1cb5ed855bb86de37892d716ff87772c740d916d65/userdata/shm DeviceMajor:0 DeviceMinor:338 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:567 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-737 DeviceMajor:0 DeviceMinor:737 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1076 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03/volumes/kubernetes.io~projected/kube-api-access-rkz2q DeviceMajor:0 DeviceMinor:1308 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:488 Capacity:49335545856 Type:vfs Inodes:6166277 
HasInodes:true} {Device:overlay_0-539 DeviceMajor:0 DeviceMinor:539 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3aa615a9d796b417e579505462fba818eb63c6e04f0fc9bcc949d228f425e015/userdata/shm DeviceMajor:0 DeviceMinor:811 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-831 DeviceMajor:0 DeviceMinor:831 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1187 DeviceMajor:0 DeviceMinor:1187 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~projected/kube-api-access-gmf87 DeviceMajor:0 DeviceMinor:256 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-505 DeviceMajor:0 DeviceMinor:505 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9492fbf-d0f4-4ecf-84ba-b089d69535c1/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:509 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1098 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1123 DeviceMajor:0 DeviceMinor:1123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1216 DeviceMajor:0 DeviceMinor:1216 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-205 DeviceMajor:0 DeviceMinor:205 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-370 DeviceMajor:0 DeviceMinor:370 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1246 DeviceMajor:0 DeviceMinor:1246 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-185 DeviceMajor:0 DeviceMinor:185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:167 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~projected/kube-api-access-dtnxg DeviceMajor:0 DeviceMinor:625 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/906a4975f221a3093bffb39f286ed36f66979e79a259e327d3df353ea75730c0/userdata/shm DeviceMajor:0 DeviceMinor:818 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-881 DeviceMajor:0 DeviceMinor:881 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-703 DeviceMajor:0 DeviceMinor:703 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-620 DeviceMajor:0 DeviceMinor:620 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4/volumes/kubernetes.io~projected/kube-api-access-9lkf2 DeviceMajor:0 DeviceMinor:753 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:597 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-122 DeviceMajor:0 DeviceMinor:122 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f0c2bd56106a14890572575d4661ad3be97a3bf1270d2b66fc4d182958ebb72/userdata/shm DeviceMajor:0 DeviceMinor:1309 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/933beda1-c930-4831-a886-3cc6b7a992ad/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:250 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/116e6b47-d435-49ca-abb5-088788daf16a/volumes/kubernetes.io~projected/kube-api-access-jt9fb DeviceMajor:0 DeviceMinor:760 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-903 DeviceMajor:0 DeviceMinor:903 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-983 DeviceMajor:0 DeviceMinor:983 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd7b027ed4dfa318c6f765780e7da4b378d4a45eec9c4d60403e7f1cb887d422/userdata/shm DeviceMajor:0 DeviceMinor:60 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bec37b05d26590ac90852a463adcb2612e0087e0d2b710f75cef020a89559e29/userdata/shm DeviceMajor:0 DeviceMinor:144 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-618 DeviceMajor:0 DeviceMinor:618 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2c6bb439-ed17-4761-b193-580be5f6aa00/volumes/kubernetes.io~projected/kube-api-access-pl6rx DeviceMajor:0 DeviceMinor:921 Capacity:49335545856 Type:vfs 
Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/924c790b9f927c27385b4ab4089845c57c9181271438a831e175110ba7205a0b/userdata/shm DeviceMajor:0 DeviceMinor:131 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/49bfccec-61ec-4bef-a561-9f6e6f906215/volumes/kubernetes.io~projected/kube-api-access-d4d5x DeviceMajor:0 DeviceMinor:253 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1095 DeviceMajor:0 DeviceMinor:1095 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1023 DeviceMajor:0 DeviceMinor:1023 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-61 DeviceMajor:0 DeviceMinor:61 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-373 DeviceMajor:0 DeviceMinor:373 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f511d03-a182-4968-ba40-5c5c10e5e6be/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:656 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-985 DeviceMajor:0 DeviceMinor:985 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-503 DeviceMajor:0 DeviceMinor:503 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-526 DeviceMajor:0 DeviceMinor:526 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/812552f3-09b1-43f8-b910-c78e776127f8/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:686 Capacity:49335545856 Type:vfs Inodes:6166277 
HasInodes:true} {Device:/var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~projected/kube-api-access-25dbj DeviceMajor:0 DeviceMinor:755 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1128 DeviceMajor:0 DeviceMinor:1128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1353 DeviceMajor:0 DeviceMinor:1353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-390 DeviceMajor:0 DeviceMinor:390 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b46907eb-36d6-4410-b7d8-8012b254c861/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:699 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f77227c8-c52d-4a71-ae1b-792055f6f23d/volumes/kubernetes.io~projected/kube-api-access-dcj62 DeviceMajor:0 DeviceMinor:107 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a1b7fe82470a07c52d024e13d01069cc6897029891ba56a4cf999816f805e9a7/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f3cd3830-62b5-49d1-917e-bd993d685c65/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:392 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d243d9f4d6d9c16fd75ab0c5744222bf367eeb4a55dc3a56ad2f15b145aca434/userdata/shm DeviceMajor:0 DeviceMinor:825 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8f82575ddbb5dc664a876d323c277ef91af413f2e9ed224a0250e918dc81ae61/userdata/shm DeviceMajor:0 DeviceMinor:937 Capacity:67108864 Type:vfs Inodes:6166277 
HasInodes:true} {Device:overlay_0-993 DeviceMajor:0 DeviceMinor:993 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f511d03-a182-4968-ba40-5c5c10e5e6be/volumes/kubernetes.io~projected/kube-api-access-4vdmz DeviceMajor:0 DeviceMinor:756 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89dd38053c589bc34a06848b1d85945f7e695c76927a0e1433d3c5444dd1eb09/userdata/shm DeviceMajor:0 DeviceMinor:791 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1342 DeviceMajor:0 DeviceMinor:1342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-395 DeviceMajor:0 DeviceMinor:395 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-560 DeviceMajor:0 DeviceMinor:560 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-659 DeviceMajor:0 DeviceMinor:659 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1133 DeviceMajor:0 DeviceMinor:1133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d279f5c83a7334bb036cb98c51916708c8e0553fc71eae75ca717993b0118072/userdata/shm DeviceMajor:0 DeviceMinor:1183 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1220 DeviceMajor:0 DeviceMinor:1220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~projected/kube-api-access-m9kf2 DeviceMajor:0 DeviceMinor:260 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c177f8fe-8145-4557-ae78-af121efe001c/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:600 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0655b027cab36844f1bd97da97e52b25a2bc334d369a5c8c6902c2874a930630/userdata/shm DeviceMajor:0 DeviceMinor:301 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-420 DeviceMajor:0 DeviceMinor:420 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6bea8d6f03626b01b052e73eecef6934077ef78e8f1a77511bf8222ddfca016e/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67/volumes/kubernetes.io~projected/kube-api-access-p67bp DeviceMajor:0 DeviceMinor:337 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:792 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-682 DeviceMajor:0 DeviceMinor:682 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 
DeviceMinor:726 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1049 DeviceMajor:0 DeviceMinor:1049 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-532 DeviceMajor:0 DeviceMinor:532 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/767424fb-babf-4b73-b5e2-0bee65fcf207/volumes/kubernetes.io~projected/kube-api-access-hl828 DeviceMajor:0 DeviceMinor:130 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa/volumes/kubernetes.io~projected/kube-api-access-ckfnc DeviceMajor:0 DeviceMinor:318 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-909 DeviceMajor:0 DeviceMinor:909 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c671a703dbac86ce7b1c5dcbfbe1729e65e787dfd6afe8e60d163a277f3e763/userdata/shm DeviceMajor:0 DeviceMinor:936 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1017 DeviceMajor:0 DeviceMinor:1017 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-749 DeviceMajor:0 DeviceMinor:749 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca1f4967e893fa63378ca09c1eeb80d103b9e8e60104bb8036c8ccc5faa3a035/userdata/shm DeviceMajor:0 DeviceMinor:676 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ed120d47621f85e51e2ef771ce28687d4c0566d41771f7a4a34982cc8d975798/userdata/shm DeviceMajor:0 DeviceMinor:593 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b426cb33-1624-45e6-b8c5-4e8d251f6339/volumes/kubernetes.io~projected/kube-api-access-hjtv8 DeviceMajor:0 DeviceMinor:766 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-907 DeviceMajor:0 DeviceMinor:907 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1046 DeviceMajor:0 DeviceMinor:1046 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/996ae0be-d36c-47f4-98b2-1c89591f9506/volumes/kubernetes.io~projected/kube-api-access-jrhmp DeviceMajor:0 DeviceMinor:278 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1104 DeviceMajor:0 DeviceMinor:1104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1261 DeviceMajor:0 DeviceMinor:1261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-379 DeviceMajor:0 DeviceMinor:379 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-422 DeviceMajor:0 DeviceMinor:422 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-65 DeviceMajor:0 DeviceMinor:65 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-996 DeviceMajor:0 DeviceMinor:996 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/58ecd829-4749-4c8a-933b-16b4acccac90/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:249 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8a8cf406c663f290d9d876c25d67c60eea733c614a8da4d512ef2ea405de9382/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/0e05783d-6bd1-4c71-87d9-1eb3edd827b3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:666 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/23bdafdd-27c9-4461-be4a-3ea916ac3875/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:771 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-849 DeviceMajor:0 DeviceMinor:849 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1170 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c106275b-72b6-4877-95c3-830f93e35375/volumes/kubernetes.io~projected/kube-api-access-4p8zb DeviceMajor:0 DeviceMinor:164 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0fcfa31d947740e8b2c9697ed507eb02078278c10de3439215a818d10753dde6/userdata/shm DeviceMajor:0 DeviceMinor:281 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cb022277db501e47c11144c7784ae45171d1fe684dae009de53aad7904c4eadc/userdata/shm DeviceMajor:0 DeviceMinor:821 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-624 DeviceMajor:0 DeviceMinor:624 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1335 DeviceMajor:0 DeviceMinor:1335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e6f05507-d5c1-4102-a220-1db715a496e3/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:244 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-589 DeviceMajor:0 DeviceMinor:589 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-890 DeviceMajor:0 DeviceMinor:890 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/334819f0fd2ed876c3fd6a59791380d72baaff02835bfb8dad2cfe7eb85f0397/userdata/shm DeviceMajor:0 DeviceMinor:112 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-729 DeviceMajor:0 DeviceMinor:729 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e0b212afd7d07d05ad4af03681bd28027ddd652c6e3c593a77163ced8697a47e/userdata/shm DeviceMajor:0 DeviceMinor:407 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f938daff-1d36-4348-a689-3d1607058296/volumes/kubernetes.io~projected/kube-api-access-xbt92 DeviceMajor:0 DeviceMinor:445 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-970 DeviceMajor:0 DeviceMinor:970 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3d6b1ce7-1213-494c-829d-186d39eac7eb/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:279 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1177 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/80cc7ad6-051b-4ee5-94af-611388d9622a/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1176 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1269 DeviceMajor:0 DeviceMinor:1269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b59c858a83fd92adb897139656578eaefef3c02c4b1c6979cd2c3711ce4f5720/userdata/shm DeviceMajor:0 DeviceMinor:145 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-696 DeviceMajor:0 DeviceMinor:696 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-967 DeviceMajor:0 DeviceMinor:967 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1102 DeviceMajor:0 DeviceMinor:1102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1244 DeviceMajor:0 DeviceMinor:1244 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-343 DeviceMajor:0 DeviceMinor:343 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c00ee01c-143b-4e44-823c-c6bfdedb8ed6/volumes/kubernetes.io~projected/kube-api-access-jx4rw DeviceMajor:0 DeviceMinor:73 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/22813c83-2f60-44ad-9624-ad367cec08f7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:254 Capacity:49335545856 Type:vfs Inodes:6166277 
HasInodes:true} {Device:/var/lib/kubelet/pods/be7a4b9e-1e9a-4298-b804-21b683805c0e/volumes/kubernetes.io~projected/kube-api-access-wvm29 DeviceMajor:0 DeviceMinor:1080 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1356 DeviceMajor:0 DeviceMinor:1356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-667 DeviceMajor:0 DeviceMinor:667 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1066 DeviceMajor:0 DeviceMinor:1066 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef21d52c34e0ff209e507b2e241489d3a22d4196f3b18bf8ced7797fda251b4a/userdata/shm DeviceMajor:0 DeviceMinor:97 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/32fd577d-8966-4ab1-95cf-357291084156/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:746 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-837 DeviceMajor:0 DeviceMinor:837 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-458 DeviceMajor:0 DeviceMinor:458 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c847d0c0-cc92-4d56-9e47-b83d9a39a745/volumes/kubernetes.io~projected/kube-api-access-qvznm DeviceMajor:0 DeviceMinor:1100 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~projected/kube-api-access-tlwzq DeviceMajor:0 DeviceMinor:243 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b74c9c781dd953b15122d114627fe038414c5f0f995df649cb54aad5bc2f4e07/userdata/shm DeviceMajor:0 DeviceMinor:501 Capacity:67108864 Type:vfs Inodes:6166277 
HasInodes:true} {Device:overlay_0-662 DeviceMajor:0 DeviceMinor:662 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d51ce58-55f6-45d5-9d5d-7b31ae42380a/volumes/kubernetes.io~projected/kube-api-access-2kh6l DeviceMajor:0 DeviceMinor:769 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a/volumes/kubernetes.io~projected/kube-api-access-8ktz5 DeviceMajor:0 DeviceMinor:135 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:622 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1048 DeviceMajor:0 DeviceMinor:1048 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1135 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-689 DeviceMajor:0 DeviceMinor:689 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1361 DeviceMajor:0 DeviceMinor:1361 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/42dcfde8494f887ef3a1248e80ba66a922da1760343eca1d2afd960d88b81901/userdata/shm DeviceMajor:0 DeviceMinor:324 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-783 DeviceMajor:0 DeviceMinor:783 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4/volumes/kubernetes.io~projected/kube-api-access-dh2rh DeviceMajor:0 DeviceMinor:489 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~projected/kube-api-access-9zxwj DeviceMajor:0 DeviceMinor:928 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8b96b8f7d5979105f35e071dc0c704b23c24808d5269da621b3e55a924016c6c/userdata/shm DeviceMajor:0 DeviceMinor:1060 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/633d33a1-e1b1-40b0-b56a-afb0c1085d97/volumes/kubernetes.io~projected/kube-api-access-62xzk DeviceMajor:0 DeviceMinor:255 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-490 DeviceMajor:0 DeviceMinor:490 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/volumes/kubernetes.io~projected/kube-api-access-zb68s DeviceMajor:0 DeviceMinor:257 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1113 DeviceMajor:0 DeviceMinor:1113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1202 DeviceMajor:0 DeviceMinor:1202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d86d5bbe-3768-4695-810b-245a56e4fd1d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:242 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:732 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:239 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-523 DeviceMajor:0 DeviceMinor:523 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b9a96f0d-16b8-47ee-baf2-807d2260fa71/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1077 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1005 DeviceMajor:0 DeviceMinor:1005 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-790 DeviceMajor:0 DeviceMinor:790 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/894870cb71b93cf170c026145b9ea2c31998ab3f9fd22cdcbd9083b354b5406e/userdata/shm DeviceMajor:0 DeviceMinor:1188 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-638 DeviceMajor:0 DeviceMinor:638 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/88b915ff-fd94-4998-aa09-70f95c0f1b8a/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:140 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/dd29bef3-d27e-48b3-9aa0-d915e949b3d5/volumes/kubernetes.io~projected/kube-api-access-zcb72 DeviceMajor:0 DeviceMinor:277 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/75b4304c-09f2-499e-8c2f-da603e43ba72/volumes/kubernetes.io~projected/kube-api-access-7jflg DeviceMajor:0 DeviceMinor:929 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/0a200f132e292ed5670ebdd181d6f49bb6c398710ac1ebdc14c3c7cdc32842f8/userdata/shm DeviceMajor:0 DeviceMinor:707 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:734 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/e6a0fc47-b446-4902-9f8a-04870cbafcab/volumes/kubernetes.io~projected/kube-api-access-kx4qf DeviceMajor:0 DeviceMinor:773 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1160 DeviceMajor:0 DeviceMinor:1160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2be5ed6-fdf0-4462-a319-eed1a5a1c778/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1179 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/volumes/kubernetes.io~projected/kube-api-access-qgl4j DeviceMajor:0 DeviceMinor:500 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9666fc94-71e3-46af-8b45-26e3a085d076/volumes/kubernetes.io~projected/kube-api-access-5bwl7 DeviceMajor:0 DeviceMinor:754 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/bf303acd-b62e-4aa3-bd8d-15f5844302d8/volumes/kubernetes.io~projected/kube-api-access-f92qq DeviceMajor:0 DeviceMinor:1174 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/714673c16fe0665ef1b16d03b2319efbfe055f0459ee84843763239d325f2af4/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2d6d12cb5b54a813b83ddffc4965018d471ee515affc2a1d0cb0aec4f5245797/userdata/shm DeviceMajor:0 DeviceMinor:814 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/1163571d-f555-41ad-b04c-74c2dc452efe/volumes/kubernetes.io~secret/secret-telemeter-client-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1314 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/74e8b3c8-da80-492c-bfcf-199b40bde40b/volumes/kubernetes.io~projected/kube-api-access-79h66 DeviceMajor:0 DeviceMinor:143 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b8d28792-2365-4e9e-b61a-46cd2ef8b632/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1156 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-739 DeviceMajor:0 DeviceMinor:739 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-687 DeviceMajor:0 DeviceMinor:687 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b23dfe329a1134a3919827a4fef6a742a5c3a54647b515a5ae24efa737eaeba7/userdata/shm DeviceMajor:0 DeviceMinor:48 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/49b426a3-f16e-40e9-a166-7270d4cfcc60/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:927 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/39623346-691b-42c8-af76-409d4f6629af/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:706 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-651 DeviceMajor:0 DeviceMinor:651 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1044 DeviceMajor:0 DeviceMinor:1044 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-333 DeviceMajor:0 DeviceMinor:333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b79ef90c-dc66-4d5f-8943-2c3ac68796ba/volumes/kubernetes.io~projected/kube-api-access-zb4rw DeviceMajor:0 DeviceMinor:507 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8e403e85ba5e32d44b48160b30b4587230e7b0f26d90604af0e04232edc028bd/userdata/shm DeviceMajor:0 DeviceMinor:776 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/410534ca0c42d1b797ab53ba5fbf6b12f5a1a2db22751f87c2aa91614045629d/userdata/shm DeviceMajor:0 DeviceMinor:824 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-541 DeviceMajor:0 DeviceMinor:541 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-473 DeviceMajor:0 DeviceMinor:473 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0de580e3a4de4a7d062f7572a6d4a10fb107356c71fe5f479e8d76eb00cfe863/userdata/shm DeviceMajor:0 DeviceMinor:516 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-601 DeviceMajor:0 DeviceMinor:601 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:1054 Capacity:49335545856 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-1350 DeviceMajor:0 DeviceMinor:1350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:005aea3f18d4d28 MacAddress:a6:50:00:2b:2b:89 Speed:10000 Mtu:8900} {Name:0655b027cab3684 MacAddress:b2:45:fc:0e:5c:5d Speed:10000 Mtu:8900} {Name:081425b6bb12667 MacAddress:82:44:29:3c:d9:a1 Speed:10000 Mtu:8900} {Name:0a200f132e292ed MacAddress:22:aa:d9:5a:54:9c Speed:10000 Mtu:8900} {Name:0c671a703dbac86 MacAddress:16:34:bb:96:9d:12 Speed:10000 Mtu:8900} {Name:0de580e3a4de4a7 
MacAddress:c2:6d:f1:84:a8:63 Speed:10000 Mtu:8900} {Name:0e75a15a8297368 MacAddress:82:01:48:24:79:6b Speed:10000 Mtu:8900} {Name:0fcfa31d947740e MacAddress:42:d7:dc:9c:8f:89 Speed:10000 Mtu:8900} {Name:267ebddc959ac57 MacAddress:ee:1e:76:44:1f:a1 Speed:10000 Mtu:8900} {Name:2b0278ee2f5e882 MacAddress:16:83:65:6a:d7:27 Speed:10000 Mtu:8900} {Name:2d6d12cb5b54a81 MacAddress:da:98:fa:a4:dd:73 Speed:10000 Mtu:8900} {Name:2e08dd98145938b MacAddress:d6:6a:a9:ce:b8:20 Speed:10000 Mtu:8900} {Name:31db0370c08dc41 MacAddress:be:3e:a5:2a:ce:91 Speed:10000 Mtu:8900} {Name:32f719b1fae3e7d MacAddress:a6:7d:89:1f:4b:29 Speed:10000 Mtu:8900} {Name:371c4924a11b805 MacAddress:ba:eb:d5:dc:5b:14 Speed:10000 Mtu:8900} {Name:3aa615a9d796b41 MacAddress:36:6d:97:60:7e:d5 Speed:10000 Mtu:8900} {Name:410534ca0c42d1b MacAddress:de:ee:bb:af:ee:2b Speed:10000 Mtu:8900} {Name:42dcfde8494f887 MacAddress:46:11:77:65:72:c4 Speed:10000 Mtu:8900} {Name:47463debfe8a4cd MacAddress:2a:6e:51:b3:ff:12 Speed:10000 Mtu:8900} {Name:4ebd137aadd86a9 MacAddress:ca:6e:7a:24:f7:8b Speed:10000 Mtu:8900} {Name:54e1df610bab1f2 MacAddress:36:73:3a:72:f0:cb Speed:10000 Mtu:8900} {Name:5dd4d0e15147dd2 MacAddress:f6:6f:70:c0:f5:41 Speed:10000 Mtu:8900} {Name:68f61c7a09ca206 MacAddress:5e:58:15:a2:13:fc Speed:10000 Mtu:8900} {Name:6bea8d6f03626b0 MacAddress:fe:14:ed:31:3b:e7 Speed:10000 Mtu:8900} {Name:714673c16fe0665 MacAddress:22:24:59:d3:7a:7e Speed:10000 Mtu:8900} {Name:79723ddb5fac1ee MacAddress:46:75:21:24:bf:10 Speed:10000 Mtu:8900} {Name:894870cb71b93cf MacAddress:82:58:28:6e:92:7d Speed:10000 Mtu:8900} {Name:89dd38053c589bc MacAddress:ea:af:6e:58:de:0c Speed:10000 Mtu:8900} {Name:8a8cf406c663f29 MacAddress:0e:7e:4c:d9:1a:36 Speed:10000 Mtu:8900} {Name:8b96b8f7d597910 MacAddress:d6:cd:34:3c:78:2e Speed:10000 Mtu:8900} {Name:8e403e85ba5e32d MacAddress:12:db:36:57:fa:d6 Speed:10000 Mtu:8900} {Name:8edfb6097f94737 MacAddress:de:2b:75:bb:48:c2 Speed:10000 Mtu:8900} {Name:8f0c2bd56106a14 MacAddress:82:79:38:34:90:a2 
Speed:10000 Mtu:8900} {Name:906a4975f221a30 MacAddress:5a:5d:64:3f:06:5a Speed:10000 Mtu:8900} {Name:922eed7d19f9dd7 MacAddress:0e:1e:76:15:48:21 Speed:10000 Mtu:8900} {Name:937f03ad2559d18 MacAddress:9a:4a:05:a3:23:5e Speed:10000 Mtu:8900} {Name:93dd263e4986822 MacAddress:ae:ba:d8:77:96:e5 Speed:10000 Mtu:8900} {Name:9e66323acb79027 MacAddress:0a:fb:1b:d7:a3:db Speed:10000 Mtu:8900} {Name:a1b7fe82470a07c MacAddress:ae:53:62:0b:d4:84 Speed:10000 Mtu:8900} {Name:a51d75323a923af MacAddress:5a:0e:de:55:0b:4b Speed:10000 Mtu:8900} {Name:aa70a59110835e6 MacAddress:32:a5:a0:53:64:b9 Speed:10000 Mtu:8900} {Name:b5410db202b2d25 MacAddress:32:da:ef:f8:83:18 Speed:10000 Mtu:8900} {Name:b5eb5695ccec6b9 MacAddress:aa:46:eb:6d:61:f1 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:aa:2b:07:18:10:d7 Speed:0 Mtu:8900} {Name:c125f0138a2358e MacAddress:86:17:c6:d9:f7:60 Speed:10000 Mtu:8900} {Name:c932287e23f5b8d MacAddress:9a:11:ac:ee:18:2a Speed:10000 Mtu:8900} {Name:cb022277db501e4 MacAddress:0a:b7:31:b7:aa:78 Speed:10000 Mtu:8900} {Name:cd174549be5b88f MacAddress:f6:ba:86:04:3a:f7 Speed:10000 Mtu:8900} {Name:d243d9f4d6d9c16 MacAddress:66:4e:48:31:9a:eb Speed:10000 Mtu:8900} {Name:d279f5c83a7334b MacAddress:66:8c:d9:18:ed:72 Speed:10000 Mtu:8900} {Name:d3656437a9ce967 MacAddress:72:b7:6e:08:a0:53 Speed:10000 Mtu:8900} {Name:da13c43822ff6eb MacAddress:1e:76:51:c4:35:1e Speed:10000 Mtu:8900} {Name:e0b212afd7d07d0 MacAddress:62:0f:24:07:cf:8f Speed:10000 Mtu:8900} {Name:ed120d47621f85e MacAddress:6e:b4:ad:41:da:69 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:63:ba:dc Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:5d:e3:99 Speed:-1 Mtu:9000} {Name:f05f4c8572660fb MacAddress:a2:2a:af:0f:62:cd Speed:10000 Mtu:8900} {Name:f5885425638056c MacAddress:f6:9b:e4:3b:07:49 Speed:10000 Mtu:8900} {Name:fd03b91adf31c70 MacAddress:ce:96:dc:19:b3:c6 Speed:10000 Mtu:8900} 
{Name:fd87d63ea110a27 MacAddress:ea:68:92:84:e6:93 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:2e:18:f0:62:3d:21 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514145280 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] 
Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496028 34361 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496095 34361 manager.go:233] Version: {KernelVersion:5.14.0-427.109.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602022246-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496379 34361 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496523 34361 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496550 34361 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496759 34361 topology_manager.go:138] "Creating topology manager with none policy" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496769 34361 container_manager_linux.go:303] "Creating device plugin manager" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496777 34361 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496795 34361 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496835 34361 state_mem.go:36] "Initialized new in-memory state store" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496922 34361 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.496989 34361 kubelet.go:418] "Attempting to sync node with API server" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.497001 34361 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.497016 34361 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.497028 34361 kubelet.go:324] "Adding apiserver pod source" Feb 24 05:37:20.497065 master-0 kubenswrapper[34361]: I0224 05:37:20.497040 34361 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 24 05:37:20.500796 master-0 kubenswrapper[34361]: I0224 05:37:20.500640 34361 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-6.rhaos4.18.git7ed6156.el9" apiVersion="v1" Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.501154 34361 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.501715 34361 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.501970 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.502004 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.502019 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.502033 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.502048 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.502063 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.502076 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.502090 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 24 05:37:20.502234 master-0 kubenswrapper[34361]: I0224 05:37:20.502105 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 24 05:37:20.504904 master-0 kubenswrapper[34361]: I0224 05:37:20.502963 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 24 05:37:20.504904 master-0 kubenswrapper[34361]: I0224 05:37:20.503045 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 24 05:37:20.504904 master-0 kubenswrapper[34361]: I0224 05:37:20.503075 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 24 05:37:20.504904 master-0 kubenswrapper[34361]: I0224 05:37:20.503148 34361 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 24 05:37:20.504904 master-0 kubenswrapper[34361]: I0224 05:37:20.503905 34361 server.go:1280] "Started kubelet"
Feb 24 05:37:20.506232 master-0 kubenswrapper[34361]: I0224 05:37:20.505441 34361 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 24 05:37:20.506232 master-0 kubenswrapper[34361]: I0224 05:37:20.505588 34361 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 24 05:37:20.506596 master-0 kubenswrapper[34361]: I0224 05:37:20.506392 34361 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 24 05:37:20.506819 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 24 05:37:20.514582 master-0 kubenswrapper[34361]: I0224 05:37:20.506812 34361 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 24 05:37:20.514582 master-0 kubenswrapper[34361]: I0224 05:37:20.513154 34361 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 05:37:20.515265 master-0 kubenswrapper[34361]: I0224 05:37:20.514836 34361 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 05:37:20.518472 master-0 kubenswrapper[34361]: I0224 05:37:20.518441 34361 server.go:449] "Adding debug handlers to kubelet server"
Feb 24 05:37:20.528522 master-0 kubenswrapper[34361]: I0224 05:37:20.528454 34361 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 24 05:37:20.528522 master-0 kubenswrapper[34361]: I0224 05:37:20.528521 34361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 24 05:37:20.529097 master-0 kubenswrapper[34361]: I0224 05:37:20.528995 34361 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-25 05:04:32 +0000 UTC, rotation deadline is 2026-02-24 22:07:50.995725701 +0000 UTC
Feb 24 05:37:20.529169 master-0 kubenswrapper[34361]: I0224 05:37:20.529098 34361 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h30m30.466635428s for next certificate rotation
Feb 24 05:37:20.529169 master-0 kubenswrapper[34361]: I0224 05:37:20.529137 34361 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 24 05:37:20.529169 master-0 kubenswrapper[34361]: I0224 05:37:20.529155 34361 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 24 05:37:20.531420 master-0 kubenswrapper[34361]: I0224 05:37:20.529344 34361 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 24 05:37:20.531420 master-0 kubenswrapper[34361]: E0224 05:37:20.529557 34361 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 24 05:37:20.531420 master-0 kubenswrapper[34361]: I0224 05:37:20.530272 34361 factory.go:55] Registering systemd factory
Feb 24 05:37:20.531420 master-0 kubenswrapper[34361]: I0224 05:37:20.530297 34361 factory.go:221] Registration of the systemd container factory successfully
Feb 24 05:37:20.532250 master-0 kubenswrapper[34361]: I0224 05:37:20.531973 34361 factory.go:153] Registering CRI-O factory
Feb 24 05:37:20.532250 master-0 kubenswrapper[34361]: I0224 05:37:20.532042 34361 factory.go:221] Registration of the crio container factory successfully
Feb 24 05:37:20.532250 master-0 kubenswrapper[34361]: I0224 05:37:20.532069 34361 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 05:37:20.532250 master-0 kubenswrapper[34361]: I0224 05:37:20.532234 34361 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 24 05:37:20.532984 master-0 kubenswrapper[34361]: I0224 05:37:20.532283 34361 factory.go:103] Registering Raw factory
Feb 24 05:37:20.532984 master-0 kubenswrapper[34361]: I0224 05:37:20.532345 34361 manager.go:1196] Started watching for new ooms in manager
Feb 24 05:37:20.533347 master-0 kubenswrapper[34361]: I0224 05:37:20.533276 34361 manager.go:319] Starting recovery of all containers
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556059 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03e4cebe-f3df-423f-be2b-7fb22bd58341" volumeName="kubernetes.io/projected/03e4cebe-f3df-423f-be2b-7fb22bd58341-kube-api-access-f9pp4" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556163 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" volumeName="kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556205 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f938daff-1d36-4348-a689-3d1607058296" volumeName="kubernetes.io/projected/f938daff-1d36-4348-a689-3d1607058296-kube-api-access-xbt92" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556230 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d6b1ce7-1213-494c-829d-186d39eac7eb" volumeName="kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556252 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d6b1ce7-1213-494c-829d-186d39eac7eb" volumeName="kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556273 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1f03d97-1a6a-41e4-9ed3-cd9b01c46400" volumeName="kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556295 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f511d03-a182-4968-ba40-5c5c10e5e6be" volumeName="kubernetes.io/empty-dir/3f511d03-a182-4968-ba40-5c5c10e5e6be-available-featuregates" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556364 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49b426a3-f16e-40e9-a166-7270d4cfcc60" volumeName="kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556395 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9666fc94-71e3-46af-8b45-26e3a085d076" volumeName="kubernetes.io/projected/9666fc94-71e3-46af-8b45-26e3a085d076-kube-api-access-5bwl7" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556420 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="feee7fe8-e805-4807-b4c0-ecc7ef0f88d9" volumeName="kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556444 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3363f001-1cfa-41f5-b245-30cc99dd09cb" volumeName="kubernetes.io/projected/3363f001-1cfa-41f5-b245-30cc99dd09cb-kube-api-access-589rv" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556465 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39623346-691b-42c8-af76-409d4f6629af" volumeName="kubernetes.io/projected/39623346-691b-42c8-af76-409d4f6629af-kube-api-access-ddfqw" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556487 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d6b1ce7-1213-494c-829d-186d39eac7eb" volumeName="kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556513 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d51ce58-55f6-45d5-9d5d-7b31ae42380a" volumeName="kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556538 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767424fb-babf-4b73-b5e2-0bee65fcf207" volumeName="kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556561 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9666fc94-71e3-46af-8b45-26e3a085d076" volumeName="kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556585 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1163571d-f555-41ad-b04c-74c2dc452efe" volumeName="kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556607 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5ede6a-9d4b-47a2-b4ba-e6018910d05a" volumeName="kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556629 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80cc7ad6-051b-4ee5-94af-611388d9622a" volumeName="kubernetes.io/projected/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-api-access-hgl5l" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556650 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c106275b-72b6-4877-95c3-830f93e35375" volumeName="kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556672 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c177f8fe-8145-4557-ae78-af121efe001c" volumeName="kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556696 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c847d0c0-cc92-4d56-9e47-b83d9a39a745" volumeName="kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556721 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" volumeName="kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556776 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49b426a3-f16e-40e9-a166-7270d4cfcc60" volumeName="kubernetes.io/projected/49b426a3-f16e-40e9-a166-7270d4cfcc60-kube-api-access-9zxwj" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556806 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556836 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556870 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5" volumeName="kubernetes.io/empty-dir/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-snapshots" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556897 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23bdafdd-27c9-4461-be4a-3ea916ac3875" volumeName="kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556920 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556947 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6a0fc47-b446-4902-9f8a-04870cbafcab" volumeName="kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556968 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48332e-92de-42aa-a6e6-db161f005e74" volumeName="kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.556989 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" volumeName="kubernetes.io/empty-dir/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-cache" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557011 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75b4304c-09f2-499e-8c2f-da603e43ba72" volumeName="kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-catalog-content" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557035 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="073c9b40-bb80-41a2-bcd2-cfdbe040a5a4" volumeName="kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-tuned" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557056 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="116e6b47-d435-49ca-abb5-088788daf16a" volumeName="kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557076 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="116e6b47-d435-49ca-abb5-088788daf16a" volumeName="kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557098 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f3825c1-975c-40b5-a6ad-0f200968b3cd" volumeName="kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-catalog-content" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557121 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b9a96f0d-16b8-47ee-baf2-807d2260fa71" volumeName="kubernetes.io/secret/b9a96f0d-16b8-47ee-baf2-807d2260fa71-tls-certificates" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557142 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be7a4b9e-1e9a-4298-b804-21b683805c0e" volumeName="kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-metrics-certs" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557163 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32fd577d-8966-4ab1-95cf-357291084156" volumeName="kubernetes.io/projected/32fd577d-8966-4ab1-95cf-357291084156-kube-api-access-fh2pc" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557188 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557209 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc0cfdd6-99d8-40dc-87d0-06c2a6767f38" volumeName="kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-srv-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557230 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2c6bb439-ed17-4761-b193-580be5f6aa00" volumeName="kubernetes.io/projected/2c6bb439-ed17-4761-b193-580be5f6aa00-kube-api-access-pl6rx" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557251 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be7a4b9e-1e9a-4298-b804-21b683805c0e" volumeName="kubernetes.io/projected/be7a4b9e-1e9a-4298-b804-21b683805c0e-kube-api-access-wvm29" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557272 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" volumeName="kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557293 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5" volumeName="kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557353 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb05b64-74d7-41bc-991c-5d3cddc9d8f4" volumeName="kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557382 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d51ce58-55f6-45d5-9d5d-7b31ae42380a" volumeName="kubernetes.io/projected/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-kube-api-access-2kh6l" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557406 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80cc7ad6-051b-4ee5-94af-611388d9622a" volumeName="kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557429 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c847d0c0-cc92-4d56-9e47-b83d9a39a745" volumeName="kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557452 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58ecd829-4749-4c8a-933b-16b4acccac90" volumeName="kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557475 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" volumeName="kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557536 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d86d5bbe-3768-4695-810b-245a56e4fd1d" volumeName="kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557560 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e7f7c02-4c84-432a-8d59-25dd3bfef5c2" volumeName="kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557583 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557607 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f511d03-a182-4968-ba40-5c5c10e5e6be" volumeName="kubernetes.io/projected/3f511d03-a182-4968-ba40-5c5c10e5e6be-kube-api-access-4vdmz" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557629 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49bfccec-61ec-4bef-a561-9f6e6f906215" volumeName="kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557653 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb05b64-74d7-41bc-991c-5d3cddc9d8f4" volumeName="kubernetes.io/projected/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-kube-api-access-7vjzn" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557675 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e7f7c02-4c84-432a-8d59-25dd3bfef5c2" volumeName="kubernetes.io/projected/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-kube-api-access-4bf6w" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557697 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557718 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d28792-2365-4e9e-b61a-46cd2ef8b632" volumeName="kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557742 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1163571d-f555-41ad-b04c-74c2dc452efe" volumeName="kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557763 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49bfccec-61ec-4bef-a561-9f6e6f906215" volumeName="kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557784 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03" volumeName="kubernetes.io/projected/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-kube-api-access-rkz2q" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557812 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6690909-3a87-4bdc-b0ec-1cdd4df32e4b" volumeName="kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557834 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5ede6a-9d4b-47a2-b4ba-e6018910d05a" volumeName="kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557855 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="812552f3-09b1-43f8-b910-c78e776127f8" volumeName="kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557876 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88b915ff-fd94-4998-aa09-70f95c0f1b8a" volumeName="kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557899 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3561f49-0808-4d96-95ec-456fcb5c5bb4" volumeName="kubernetes.io/projected/a3561f49-0808-4d96-95ec-456fcb5c5bb4-kube-api-access-r5tgk" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557919 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-image-import-ca" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557939 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6a0fc47-b446-4902-9f8a-04870cbafcab" volumeName="kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557960 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5" volumeName="kubernetes.io/projected/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-kube-api-access-5dwz2" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.557983 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558003 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558024 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558050 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23bdafdd-27c9-4461-be4a-3ea916ac3875" volumeName="kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558073 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558095 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="996ae0be-d36c-47f4-98b2-1c89591f9506" volumeName="kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558145 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3561f49-0808-4d96-95ec-456fcb5c5bb4" volumeName="kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558171 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" volumeName="kubernetes.io/secret/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-catalogserver-certs" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558199 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f77227c8-c52d-4a71-ae1b-792055f6f23d" volumeName="kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558226 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1163571d-f555-41ad-b04c-74c2dc452efe" volumeName="kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558254 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75b4304c-09f2-499e-8c2f-da603e43ba72" volumeName="kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-utilities" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558281 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f05507-d5c1-4102-a220-1db715a496e3" volumeName="kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558338 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2be5ed6-fdf0-4462-a319-eed1a5a1c778" volumeName="kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558366 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558393 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="798dcf46-8377-46b8-8387-5261d9bbefa1" volumeName="kubernetes.io/projected/798dcf46-8377-46b8-8387-5261d9bbefa1-kube-api-access-jl24z" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558419 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80cc7ad6-051b-4ee5-94af-611388d9622a" volumeName="kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558445 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80cc7ad6-051b-4ee5-94af-611388d9622a" volumeName="kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558472 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" volumeName="kubernetes.io/empty-dir/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-cache" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558500 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2be5ed6-fdf0-4462-a319-eed1a5a1c778" volumeName="kubernetes.io/projected/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-kube-api-access-lm88x" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558529 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80cc7ad6-051b-4ee5-94af-611388d9622a" volumeName="kubernetes.io/empty-dir/80cc7ad6-051b-4ee5-94af-611388d9622a-volume-directive-shadow" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558555 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="996ae0be-d36c-47f4-98b2-1c89591f9506" volumeName="kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558582 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf303acd-b62e-4aa3-bd8d-15f5844302d8" volumeName="kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558608 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" volumeName="kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558634 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1163571d-f555-41ad-b04c-74c2dc452efe" volumeName="kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558669 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5ede6a-9d4b-47a2-b4ba-e6018910d05a" volumeName="kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558696 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee01c-143b-4e44-823c-c6bfdedb8ed6" volumeName="kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558726 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" volumeName="kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca" seLinuxMountContext=""
Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558756 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual
state" pod="" podName="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" volumeName="kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558781 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e7f7c02-4c84-432a-8d59-25dd3bfef5c2" volumeName="kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558807 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558835 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88b915ff-fd94-4998-aa09-70f95c0f1b8a" volumeName="kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558860 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-encryption-config" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.558897 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3cd3830-62b5-49d1-917e-bd993d685c65" volumeName="kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559018 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="f938daff-1d36-4348-a689-3d1607058296" volumeName="kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559056 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b46907eb-36d6-4410-b7d8-8012b254c861" volumeName="kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559090 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" volumeName="kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-ca-certs" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559118 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3cd3830-62b5-49d1-917e-bd993d685c65" volumeName="kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559145 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" volumeName="kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559173 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39c4d0aa-c372-4d02-9302-337e68b56784" volumeName="kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559203 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="812552f3-09b1-43f8-b910-c78e776127f8" volumeName="kubernetes.io/projected/812552f3-09b1-43f8-b910-c78e776127f8-kube-api-access-4lt5r" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559229 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c106275b-72b6-4877-95c3-830f93e35375" volumeName="kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559258 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d86d5bbe-3768-4695-810b-245a56e4fd1d" volumeName="kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559287 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" volumeName="kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559342 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22813c83-2f60-44ad-9624-ad367cec08f7" volumeName="kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559371 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be7a4b9e-1e9a-4298-b804-21b683805c0e" volumeName="kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-default-certificate" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559398 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf303acd-b62e-4aa3-bd8d-15f5844302d8" volumeName="kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559432 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f05507-d5c1-4102-a220-1db715a496e3" volumeName="kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559458 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6690909-3a87-4bdc-b0ec-1cdd4df32e4b" volumeName="kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559486 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1163571d-f555-41ad-b04c-74c2dc452efe" volumeName="kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559510 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767424fb-babf-4b73-b5e2-0bee65fcf207" volumeName="kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559537 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="812552f3-09b1-43f8-b910-c78e776127f8" volumeName="kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-etcd-serving-ca" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559567 34361 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="c847d0c0-cc92-4d56-9e47-b83d9a39a745" volumeName="kubernetes.io/projected/c847d0c0-cc92-4d56-9e47-b83d9a39a745-kube-api-access-qvznm" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559595 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48332e-92de-42aa-a6e6-db161f005e74" volumeName="kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559622 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767424fb-babf-4b73-b5e2-0bee65fcf207" volumeName="kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559649 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-config" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559676 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559704 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="633d33a1-e1b1-40b0-b56a-afb0c1085d97" volumeName="kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559735 34361 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/projected/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-kube-api-access-dtnxg" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559760 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e05783d-6bd1-4c71-87d9-1eb3edd827b3" volumeName="kubernetes.io/configmap/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-service-ca" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559790 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e05783d-6bd1-4c71-87d9-1eb3edd827b3" volumeName="kubernetes.io/secret/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-serving-cert" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559818 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22813c83-2f60-44ad-9624-ad367cec08f7" volumeName="kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config" seLinuxMountContext="" Feb 24 05:37:20.559547 master-0 kubenswrapper[34361]: I0224 05:37:20.559845 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2c6bb439-ed17-4761-b193-580be5f6aa00" volumeName="kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-utilities" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.559875 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49b426a3-f16e-40e9-a166-7270d4cfcc60" volumeName="kubernetes.io/empty-dir/49b426a3-f16e-40e9-a166-7270d4cfcc60-tmpfs" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.559903 34361 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="b8d28792-2365-4e9e-b61a-46cd2ef8b632" volumeName="kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.559929 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc0cfdd6-99d8-40dc-87d0-06c2a6767f38" volumeName="kubernetes.io/projected/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-kube-api-access-25dbj" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.559956 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="073c9b40-bb80-41a2-bcd2-cfdbe040a5a4" volumeName="kubernetes.io/projected/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-kube-api-access-dh2rh" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.559984 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22813c83-2f60-44ad-9624-ad367cec08f7" volumeName="kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560014 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48332e-92de-42aa-a6e6-db161f005e74" volumeName="kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560140 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="767424fb-babf-4b73-b5e2-0bee65fcf207" volumeName="kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560230 34361 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="933beda1-c930-4831-a886-3cc6b7a992ad" volumeName="kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560263 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf303acd-b62e-4aa3-bd8d-15f5844302d8" volumeName="kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560291 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2be5ed6-fdf0-4462-a319-eed1a5a1c778" volumeName="kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560375 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1163571d-f555-41ad-b04c-74c2dc452efe" volumeName="kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560408 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560435 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a3561f49-0808-4d96-95ec-456fcb5c5bb4" volumeName="kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 
05:37:20.560462 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be7a4b9e-1e9a-4298-b804-21b683805c0e" volumeName="kubernetes.io/configmap/be7a4b9e-1e9a-4298-b804-21b683805c0e-service-ca-bundle" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560491 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc0cfdd6-99d8-40dc-87d0-06c2a6767f38" volumeName="kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-profile-collector-cert" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560519 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2be5ed6-fdf0-4462-a319-eed1a5a1c778" volumeName="kubernetes.io/empty-dir/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-textfile" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560547 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" volumeName="kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560573 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1163571d-f555-41ad-b04c-74c2dc452efe" volumeName="kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560603 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" volumeName="kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-kube-api-access-qgl4j" seLinuxMountContext="" Feb 24 05:37:20.570728 
master-0 kubenswrapper[34361]: I0224 05:37:20.560631 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39623346-691b-42c8-af76-409d4f6629af" volumeName="kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-config" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560659 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75b4304c-09f2-499e-8c2f-da603e43ba72" volumeName="kubernetes.io/projected/75b4304c-09f2-499e-8c2f-da603e43ba72-kube-api-access-7jflg" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560691 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80cc7ad6-051b-4ee5-94af-611388d9622a" volumeName="kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560717 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c106275b-72b6-4877-95c3-830f93e35375" volumeName="kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560743 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c177f8fe-8145-4557-ae78-af121efe001c" volumeName="kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560767 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7a2c651d-ea1a-41f2-9745-04adc8d88904" volumeName="kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94" seLinuxMountContext="" Feb 24 
05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560795 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b46907eb-36d6-4410-b7d8-8012b254c861" volumeName="kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560821 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b46907eb-36d6-4410-b7d8-8012b254c861" volumeName="kubernetes.io/projected/b46907eb-36d6-4410-b7d8-8012b254c861-kube-api-access-k8dtv" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560851 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6a0fc47-b446-4902-9f8a-04870cbafcab" volumeName="kubernetes.io/projected/e6a0fc47-b446-4902-9f8a-04870cbafcab-kube-api-access-kx4qf" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560877 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" volumeName="kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560903 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3363f001-1cfa-41f5-b245-30cc99dd09cb" volumeName="kubernetes.io/configmap/3363f001-1cfa-41f5-b245-30cc99dd09cb-config-volume" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560931 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b426cb33-1624-45e6-b8c5-4e8d251f6339" volumeName="kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8" seLinuxMountContext="" 
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560957 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" volumeName="kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.560985 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e05783d-6bd1-4c71-87d9-1eb3edd827b3" volumeName="kubernetes.io/projected/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-kube-api-access" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561011 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" volumeName="kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561036 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-trusted-ca-bundle" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561064 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6a0fc47-b446-4902-9f8a-04870cbafcab" volumeName="kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561092 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5" volumeName="kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert" seLinuxMountContext="" Feb 
24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561121 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f77227c8-c52d-4a71-ae1b-792055f6f23d" volumeName="kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561150 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2c6bb439-ed17-4761-b193-580be5f6aa00" volumeName="kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-catalog-content" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561177 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5afff8-1081-4acc-8ab9-d6bfd8df1d67" volumeName="kubernetes.io/secret/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-key" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561204 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee01c-143b-4e44-823c-c6bfdedb8ed6" volumeName="kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561230 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5" volumeName="kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561257 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48332e-92de-42aa-a6e6-db161f005e74" volumeName="kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles" 
seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561282 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48332e-92de-42aa-a6e6-db161f005e74" volumeName="kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561338 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39623346-691b-42c8-af76-409d4f6629af" volumeName="kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561368 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" volumeName="kubernetes.io/projected/b79ef90c-dc66-4d5f-8943-2c3ac68796ba-kube-api-access-zb4rw" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561398 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c177f8fe-8145-4557-ae78-af121efe001c" volumeName="kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561425 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d51ce58-55f6-45d5-9d5d-7b31ae42380a" volumeName="kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert" seLinuxMountContext="" Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561450 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b426cb33-1624-45e6-b8c5-4e8d251f6339" 
volumeName="kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561475 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e6f05507-d5c1-4102-a220-1db715a496e3" volumeName="kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561504 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2be5ed6-fdf0-4462-a319-eed1a5a1c778" volumeName="kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561533 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3cd3830-62b5-49d1-917e-bd993d685c65" volumeName="kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561558 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39623346-691b-42c8-af76-409d4f6629af" volumeName="kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cert" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561588 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="633d33a1-e1b1-40b0-b56a-afb0c1085d97" volumeName="kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561617 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561644 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b426cb33-1624-45e6-b8c5-4e8d251f6339" volumeName="kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561672 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c106275b-72b6-4877-95c3-830f93e35375" volumeName="kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561698 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd674e58-b749-46fb-8a28-66012fd8b401" volumeName="kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-catalog-content" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561724 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" volumeName="kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561757 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="812552f3-09b1-43f8-b910-c78e776127f8" volumeName="kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-encryption-config" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561786 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b426cb33-1624-45e6-b8c5-4e8d251f6339" volumeName="kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561811 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3363f001-1cfa-41f5-b245-30cc99dd09cb" volumeName="kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561836 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39c4d0aa-c372-4d02-9302-337e68b56784" volumeName="kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561863 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58ecd829-4749-4c8a-933b-16b4acccac90" volumeName="kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561891 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" volumeName="kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561917 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf303acd-b62e-4aa3-bd8d-15f5844302d8" volumeName="kubernetes.io/projected/bf303acd-b62e-4aa3-bd8d-15f5844302d8-kube-api-access-f92qq" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561945 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23bdafdd-27c9-4461-be4a-3ea916ac3875" volumeName="kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-bound-sa-token" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.561974 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03" volumeName="kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562000 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e5ede6a-9d4b-47a2-b4ba-e6018910d05a" volumeName="kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562027 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d28792-2365-4e9e-b61a-46cd2ef8b632" volumeName="kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562054 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1533c5fa-0387-40bd-a959-e714b65cdacc" volumeName="kubernetes.io/projected/1533c5fa-0387-40bd-a959-e714b65cdacc-kube-api-access-jspzm" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562084 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3d6b1ce7-1213-494c-829d-186d39eac7eb" volumeName="kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562140 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1f03d97-1a6a-41e4-9ed3-cd9b01c46400" volumeName="kubernetes.io/projected/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-kube-api-access-nb75b" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562170 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48332e-92de-42aa-a6e6-db161f005e74" volumeName="kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562211 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="812552f3-09b1-43f8-b910-c78e776127f8" volumeName="kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-audit-policies" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562245 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39c4d0aa-c372-4d02-9302-337e68b56784" volumeName="kubernetes.io/projected/39c4d0aa-c372-4d02-9302-337e68b56784-kube-api-access-b2fkp" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562274 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f511d03-a182-4968-ba40-5c5c10e5e6be" volumeName="kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562343 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="812552f3-09b1-43f8-b910-c78e776127f8" volumeName="kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562382 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="116e6b47-d435-49ca-abb5-088788daf16a" volumeName="kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562411 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9666fc94-71e3-46af-8b45-26e3a085d076" volumeName="kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-profile-collector-cert" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562443 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23bdafdd-27c9-4461-be4a-3ea916ac3875" volumeName="kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-kube-api-access-cczbm" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562526 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2f48332e-92de-42aa-a6e6-db161f005e74" volumeName="kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562560 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39623346-691b-42c8-af76-409d4f6629af" volumeName="kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-images" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562593 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58ecd829-4749-4c8a-933b-16b4acccac90" volumeName="kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562621 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f3825c1-975c-40b5-a6ad-0f200968b3cd" volumeName="kubernetes.io/projected/8f3825c1-975c-40b5-a6ad-0f200968b3cd-kube-api-access-l8z6s" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562656 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="933beda1-c930-4831-a886-3cc6b7a992ad" volumeName="kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562684 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee01c-143b-4e44-823c-c6bfdedb8ed6" volumeName="kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562712 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa" volumeName="kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562740 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39c4d0aa-c372-4d02-9302-337e68b56784" volumeName="kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562768 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="633d33a1-e1b1-40b0-b56a-afb0c1085d97" volumeName="kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562800 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f3825c1-975c-40b5-a6ad-0f200968b3cd" volumeName="kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-utilities" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562913 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" volumeName="kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.562999 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="116e6b47-d435-49ca-abb5-088788daf16a" volumeName="kubernetes.io/projected/116e6b47-d435-49ca-abb5-088788daf16a-kube-api-access-jt9fb" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563077 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b8d28792-2365-4e9e-b61a-46cd2ef8b632" volumeName="kubernetes.io/projected/b8d28792-2365-4e9e-b61a-46cd2ef8b632-kube-api-access-77lsr" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563113 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" volumeName="kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-kube-api-access-fzp4b" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563191 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="812552f3-09b1-43f8-b910-c78e776127f8" volumeName="kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-etcd-client" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563225 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88b915ff-fd94-4998-aa09-70f95c0f1b8a" volumeName="kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563300 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd674e58-b749-46fb-8a28-66012fd8b401" volumeName="kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-utilities" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563613 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49b426a3-f16e-40e9-a166-7270d4cfcc60" volumeName="kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563650 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59333a14-5bdc-4590-a3da-af7300f086da" volumeName="kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563728 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3cd3830-62b5-49d1-917e-bd993d685c65" volumeName="kubernetes.io/projected/f3cd3830-62b5-49d1-917e-bd993d685c65-kube-api-access-957g9" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563804 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="073c9b40-bb80-41a2-bcd2-cfdbe040a5a4" volumeName="kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-tmp" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563837 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="88b915ff-fd94-4998-aa09-70f95c0f1b8a" volumeName="kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563863 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be7a4b9e-1e9a-4298-b804-21b683805c0e" volumeName="kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-stats-auth" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563890 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7dcc5520-7aa8-4cd5-b06d-591827ed4e2a" volumeName="kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563918 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="933beda1-c930-4831-a886-3cc6b7a992ad" volumeName="kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563946 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-serving-ca" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563973 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd674e58-b749-46fb-8a28-66012fd8b401" volumeName="kubernetes.io/projected/cd674e58-b749-46fb-8a28-66012fd8b401-kube-api-access-67qg5" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.563999 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d86d5bbe-3768-4695-810b-245a56e4fd1d" volumeName="kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.564030 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1163571d-f555-41ad-b04c-74c2dc452efe" volumeName="kubernetes.io/projected/1163571d-f555-41ad-b04c-74c2dc452efe-kube-api-access-46fll" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.564060 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="32fd577d-8966-4ab1-95cf-357291084156" volumeName="kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.564089 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e8b3c8-da80-492c-bfcf-199b40bde40b" volumeName="kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.564115 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5afff8-1081-4acc-8ab9-d6bfd8df1d67" volumeName="kubernetes.io/configmap/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-cabundle" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.564146 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5afff8-1081-4acc-8ab9-d6bfd8df1d67" volumeName="kubernetes.io/projected/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-kube-api-access-p67bp" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.564196 34361 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" volumeName="kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-client" seLinuxMountContext=""
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.564224 34361 reconstruct.go:97] "Volume reconstruction finished"
Feb 24 05:37:20.570728 master-0 kubenswrapper[34361]: I0224 05:37:20.564243 34361 reconciler.go:26] "Reconciler: start to sync state"
Feb 24 05:37:20.575733 master-0 kubenswrapper[34361]: I0224 05:37:20.570995 34361 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 24 05:37:20.593968 master-0 kubenswrapper[34361]: I0224 05:37:20.592896 34361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 24 05:37:20.595991 master-0 kubenswrapper[34361]: I0224 05:37:20.595951 34361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 24 05:37:20.596100 master-0 kubenswrapper[34361]: I0224 05:37:20.596013 34361 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 24 05:37:20.596100 master-0 kubenswrapper[34361]: I0224 05:37:20.596049 34361 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 24 05:37:20.596182 master-0 kubenswrapper[34361]: E0224 05:37:20.596125 34361 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 24 05:37:20.598839 master-0 kubenswrapper[34361]: I0224 05:37:20.598791 34361 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 24 05:37:20.609837 master-0 kubenswrapper[34361]: I0224 05:37:20.609771 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-86b8dc6d6-mcf2z_5d51ce58-55f6-45d5-9d5d-7b31ae42380a/cluster-autoscaler-operator/0.log"
Feb 24 05:37:20.610446 master-0 kubenswrapper[34361]: I0224 05:37:20.610373 34361 generic.go:334] "Generic (PLEG): container finished" podID="5d51ce58-55f6-45d5-9d5d-7b31ae42380a" containerID="bb3a0e8898f8ea9060490a27cc51b9a9e7a34486fe6313b2342ac6b15f983128" exitCode=255
Feb 24 05:37:20.613622 master-0 kubenswrapper[34361]: I0224 05:37:20.613524 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-9cc7d7bb-t75jj_347c43e5-86d5-436f-bdc5-1c7bbe19ab2a/manager/1.log"
Feb 24 05:37:20.614292 master-0 kubenswrapper[34361]: I0224 05:37:20.614227 34361 generic.go:334] "Generic (PLEG): container finished" podID="347c43e5-86d5-436f-bdc5-1c7bbe19ab2a" containerID="27d3c979d980c52be573082c4d98e2b43efa2f5962b15df7eb3f072aaaaf8885" exitCode=1
Feb 24 05:37:20.618227 master-0 kubenswrapper[34361]: I0224 05:37:20.618148 34361 generic.go:334] "Generic (PLEG): container finished" podID="b21148ab-4e3e-4d0b-b198-3278dd8e2e7e" containerID="1ace97d4544be2984fbfabaf345c26dd7a0a17435d49cf2e1b85891ef684fa54" exitCode=0
Feb 24 05:37:20.621195 master-0 kubenswrapper[34361]: I0224 05:37:20.621151 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-686847ff5f-zzvtt_32fd577d-8966-4ab1-95cf-357291084156/control-plane-machine-set-operator/1.log"
Feb 24 05:37:20.621266 master-0 kubenswrapper[34361]: I0224 05:37:20.621214 34361 generic.go:334] "Generic (PLEG): container finished" podID="32fd577d-8966-4ab1-95cf-357291084156" containerID="b931c4e73120acfd5edaa21c3bd09b78ab41757182041f2c3263ed0153cf894b" exitCode=1
Feb 24 05:37:20.630861 master-0 kubenswrapper[34361]: I0224 05:37:20.630815 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-d6bb9bb76-54hnv_39623346-691b-42c8-af76-409d4f6629af/cluster-baremetal-operator/2.log"
Feb 24 05:37:20.631380 master-0 kubenswrapper[34361]: I0224 05:37:20.631326 34361 generic.go:334] "Generic (PLEG): container finished" podID="39623346-691b-42c8-af76-409d4f6629af" containerID="86e637d0b5dc95d562f8425432d6a525c0e0e358c1d51fc8a2c0d80b43fd747a" exitCode=1
Feb 24 05:37:20.633887 master-0 kubenswrapper[34361]: I0224 05:37:20.633838 34361 generic.go:334] "Generic (PLEG): container finished" podID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerID="adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff" exitCode=0
Feb 24 05:37:20.636099 master-0 kubenswrapper[34361]: I0224 05:37:20.636038 34361 generic.go:334] "Generic (PLEG): container finished" podID="e6f05507-d5c1-4102-a220-1db715a496e3" containerID="e2064230fd04624f769c4f745b80aa38ea29b6c2deabd8a0fd7e19128af8486a" exitCode=0
Feb 24 05:37:20.638976 master-0 kubenswrapper[34361]: I0224 05:37:20.638928 34361 generic.go:334] "Generic (PLEG): container finished" podID="933beda1-c930-4831-a886-3cc6b7a992ad" containerID="2c56b69fc4337064fa388eb97509499abfd2df910bf7a2fa34bbdc4682b29843" exitCode=0
Feb 24 05:37:20.645237 master-0 kubenswrapper[34361]: I0224 05:37:20.645187 34361 generic.go:334] "Generic (PLEG): container finished" podID="23bdafdd-27c9-4461-be4a-3ea916ac3875" containerID="e316013fb83fe451b12a337302e18c3ea427b3968c1f30f37e4c5892013d663c" exitCode=0
Feb 24 05:37:20.650943 master-0 kubenswrapper[34361]: I0224 05:37:20.650911 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-bcf775fc9-h99t4_6e5ede6a-9d4b-47a2-b4ba-e6018910d05a/cluster-node-tuning-operator/0.log"
Feb 24 05:37:20.651010 master-0 kubenswrapper[34361]: I0224 05:37:20.650955 34361 generic.go:334] "Generic (PLEG): container finished" podID="6e5ede6a-9d4b-47a2-b4ba-e6018910d05a" containerID="8e61e1d5a62185ea40dd7889454ccd250bbeb0122433d8e3015d94ba9f1d1334" exitCode=1
Feb 24 05:37:20.654923 master-0 kubenswrapper[34361]: I0224 05:37:20.654079 34361 generic.go:334] "Generic (PLEG): container finished" podID="ab5afff8-1081-4acc-8ab9-d6bfd8df1d67" containerID="6c52c639645d2cd2c7e662742a4602420e9f03d769221f35786d315c1351ca22" exitCode=0
Feb 24 05:37:20.659262 master-0 kubenswrapper[34361]: I0224 05:37:20.659207 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-7dd9c7d7b9-pb6sw_e6a0fc47-b446-4902-9f8a-04870cbafcab/machine-approver-controller/0.log"
Feb 24 05:37:20.660614 master-0 kubenswrapper[34361]: I0224 05:37:20.660549 34361 generic.go:334] "Generic (PLEG): container finished" podID="e6a0fc47-b446-4902-9f8a-04870cbafcab" containerID="ff86ebcc5c21c17d77b09c8668eacb2f60f3347c8c630b1700b81d719fb05f20" exitCode=255
Feb 24 05:37:20.669052 master-0 kubenswrapper[34361]: I0224 05:37:20.669006 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t_f3cd3830-62b5-49d1-917e-bd993d685c65/config-sync-controllers/0.log"
Feb 24 05:37:20.670126 master-0 kubenswrapper[34361]: I0224 05:37:20.670056 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t_f3cd3830-62b5-49d1-917e-bd993d685c65/cluster-cloud-controller-manager/0.log"
Feb 24 05:37:20.670179 master-0 kubenswrapper[34361]: I0224 05:37:20.670140 34361 generic.go:334] "Generic (PLEG): container finished" podID="f3cd3830-62b5-49d1-917e-bd993d685c65" containerID="1bb8d464111f0e717ad599e137d9e8e3853e8cfeea75bffbb868b896a7e93fff" exitCode=1
Feb 24 05:37:20.670215 master-0 kubenswrapper[34361]: I0224 05:37:20.670189 34361 generic.go:334] "Generic (PLEG): container finished" podID="f3cd3830-62b5-49d1-917e-bd993d685c65" containerID="1f44dc53b225ecb6e6f89dd2368c871c5572185f200fea78cfb5b504bac772aa" exitCode=1
Feb 24 05:37:20.679919 master-0 kubenswrapper[34361]: I0224 05:37:20.679802 34361 generic.go:334] "Generic (PLEG): container finished" podID="154c1cd0-d69a-4213-8fc2-2d80217c358e" containerID="b3e9d6215183b510ab3e035f5ecd9035f6b7ed2689c41b39393ba0067bb54568" exitCode=0
Feb 24 05:37:20.684259 master-0 kubenswrapper[34361]: I0224 05:37:20.684200 34361 generic.go:334] "Generic (PLEG): container finished" podID="633d33a1-e1b1-40b0-b56a-afb0c1085d97" containerID="f0a59447aa5599eed278c625c9ff436eeea9214419570f5ba689ba155470685a" exitCode=0
Feb 24 05:37:20.684259 master-0 kubenswrapper[34361]: I0224 05:37:20.684256 34361 generic.go:334] "Generic (PLEG): container finished" podID="633d33a1-e1b1-40b0-b56a-afb0c1085d97" containerID="4a59d0e70f795f32652b83ec45dcee79f28f6c433debe212f2b8fdd27b68d652" exitCode=0
Feb 24 05:37:20.684358 master-0 kubenswrapper[34361]: I0224 05:37:20.684268 34361 generic.go:334] "Generic (PLEG): container finished" podID="633d33a1-e1b1-40b0-b56a-afb0c1085d97" containerID="949e362ec4e4631e4492e74c9f4477ed75b0f79c5280bc8dd59a6bd3118464ad" exitCode=0
Feb 24 05:37:20.690927 master-0 kubenswrapper[34361]: I0224 05:37:20.690844 34361 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="91e0b255f1211698af466c04efc39f34de18fa6be54be7cd67ac60b0d5f244e7" exitCode=0
Feb 24 05:37:20.690927 master-0 kubenswrapper[34361]: I0224 05:37:20.690921 34361 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="bafd9772766031fe924e6722dc991fce3b4b72af5430d21cc3d769595f49edeb" exitCode=0
Feb 24 05:37:20.691027 master-0 kubenswrapper[34361]: I0224 05:37:20.690955 34361 generic.go:334] "Generic (PLEG): container finished" podID="b419b8533666d3ae7054c771ce97a95f" containerID="d72f9375dea0ad0635b80a9933bdb84b391c0ae97efa1ec6ec782f2d615cceb4" exitCode=0
Feb 24 05:37:20.696245 master-0 kubenswrapper[34361]: I0224 05:37:20.696186 34361 generic.go:334] "Generic (PLEG): container finished" podID="feee7fe8-e805-4807-b4c0-ecc7ef0f88d9" containerID="e0310f65eb21da7836bef1892997027dc547f133c634a87f14b119b040f60bd1" exitCode=0
Feb 24 05:37:20.696398 master-0 kubenswrapper[34361]: E0224 05:37:20.696356 34361 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 24 05:37:20.698743 master-0 kubenswrapper[34361]: I0224 05:37:20.698708 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_e44f770d-f88d-446a-a22f-51b30e89690c/installer/0.log"
Feb 24 05:37:20.698811 master-0 kubenswrapper[34361]: I0224 05:37:20.698774 34361 generic.go:334] "Generic (PLEG): container finished" podID="e44f770d-f88d-446a-a22f-51b30e89690c" containerID="1f43a4854636c4d4d499b77fab14041aa2c65280b5d333f68ca719e5325adfaf" exitCode=1
Feb 24 05:37:20.701235 master-0 kubenswrapper[34361]: I0224 05:37:20.701194 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_74d070e9-4193-4598-ad68-15955b07d649/installer/0.log"
Feb 24 05:37:20.701286 master-0 kubenswrapper[34361]: I0224 05:37:20.701256 34361 generic.go:334] "Generic (PLEG): container finished" podID="74d070e9-4193-4598-ad68-15955b07d649" containerID="ec62ccfb72151c7c722b6450bced3a8fc5369d64de69ed787b605e7b33bf1f14" exitCode=1
Feb 24 05:37:20.708125 master-0 kubenswrapper[34361]: I0224 05:37:20.708067 34361 generic.go:334] "Generic (PLEG): container finished" podID="58ecd829-4749-4c8a-933b-16b4acccac90" containerID="19a4a70cd708813c9cf34e54dd49971eba939aacdcaa013905918a3ca917b13e" exitCode=0
Feb 24 05:37:20.712304 master-0 kubenswrapper[34361]: I0224 05:37:20.712232 34361 generic.go:334] "Generic (PLEG): container finished" podID="17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d" containerID="4128e6ec737b6b0efca5e7827427326735a8755e3faf1df48d6f075e6755cd88" exitCode=0
Feb 24 05:37:20.719099 master-0 kubenswrapper[34361]: I0224 05:37:20.719004 34361 generic.go:334] "Generic (PLEG): container finished" podID="c3fed34f-b275-42c6-af6c-8de3e6fe0f9e" containerID="8eadd02a3eb053b6fcdd393a3aeb7df438083855b4ae5ac3cfedf974ce5cb69c" exitCode=0
Feb 24 05:37:20.721663 master-0 kubenswrapper[34361]: I0224 05:37:20.721586 34361 generic.go:334] "Generic (PLEG): container finished" podID="8978e4e5-18ef-4b69-a127-5e9409163935" containerID="3c24b58bd92b804a63d803200f7a1ff1770a8e7351e2091f1326f31e84f6d272" exitCode=0
Feb 24 05:37:20.724072 master-0 kubenswrapper[34361]: I0224 05:37:20.724011 34361 generic.go:334] "Generic (PLEG): container finished" podID="afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" containerID="8904c0214073753fcab4acc8adc0da951a7afde283497eeb5955cf76d5cf0b70" exitCode=0
Feb 24 05:37:20.725954 master-0 kubenswrapper[34361]: I0224 05:37:20.725907 34361 generic.go:334] "Generic (PLEG): container finished" podID="4df29682-0936-44a2-9629-2e90115671e0" containerID="9591bdc727c99f89e551f4c32dad8c2aa3f7be8a52343c558f1322701668f7df" exitCode=0
Feb 24 05:37:20.734343 master-0 kubenswrapper[34361]: I0224 05:37:20.734252 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-6847bb4785-vqn96_b79ef90c-dc66-4d5f-8943-2c3ac68796ba/snapshot-controller/4.log"
Feb 24 05:37:20.734449 master-0 kubenswrapper[34361]: I0224 05:37:20.734394 34361 generic.go:334] "Generic (PLEG): container finished" podID="b79ef90c-dc66-4d5f-8943-2c3ac68796ba" containerID="1dbe14eb848b87711b564dbd00190070ac04cfc0d462906a427a3af22f0cfd2a" exitCode=1
Feb 24 05:37:20.742072 master-0 kubenswrapper[34361]: I0224 05:37:20.740680 34361 generic.go:334] "Generic (PLEG): container finished" podID="39c4d0aa-c372-4d02-9302-337e68b56784" containerID="986b482003ff19c4b718ec972373fc705ec17bcf47510b88393859e89ab2007d" exitCode=0
Feb 24 05:37:20.754944 master-0 kubenswrapper[34361]: I0224 05:37:20.754859 34361 generic.go:334] "Generic (PLEG): container finished" podID="4e058a29-f50f-473a-a217-0021923ebc7c" containerID="4a683c2df0643cd32ba4287e2bcfda52e85d58cdef62154fe0290d7b742d186c" exitCode=0
Feb 24 05:37:20.762133 master-0 kubenswrapper[34361]: I0224 05:37:20.762066 34361 generic.go:334] "Generic (PLEG): container finished" podID="0e05783d-6bd1-4c71-87d9-1eb3edd827b3" containerID="883402f37d06428c5ac9d5006756ff5c514e20caeb827c4b80ee87b11ce334df" exitCode=0
Feb 24 05:37:20.765937 master-0 kubenswrapper[34361]: I0224 05:37:20.765878 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_7d063f48-5f89-47d0-bafc-84a52839c806/installer/0.log"
Feb 24 05:37:20.766084 master-0 kubenswrapper[34361]: I0224 05:37:20.766000 34361 generic.go:334] "Generic (PLEG): container finished" podID="7d063f48-5f89-47d0-bafc-84a52839c806" containerID="d347e24453ee574539f27391a430e305f8f75f2030a25c584a9b3378c1e400e8" exitCode=1
Feb 24 05:37:20.780451 master-0 kubenswrapper[34361]: I0224 05:37:20.780275 34361 generic.go:334] "Generic (PLEG): container finished" podID="74e8b3c8-da80-492c-bfcf-199b40bde40b" containerID="1bdb0179be74494ec4b280a7fe7b1b7a56e9431efa12bfe29e8db06ceb6772c4" exitCode=0
Feb 24 05:37:20.785604 master-0 kubenswrapper[34361]: I0224 05:37:20.785523 34361 generic.go:334] "Generic (PLEG): container finished" podID="29b0d9bb-1b88-4023-8b08-896d581c79c7" containerID="e12e5627ae03ebb97ca362b2b8faa759ca1b9a419649b89bb29941198d85f2b3" exitCode=0
Feb 24 05:37:20.789853 master-0 kubenswrapper[34361]: I0224 05:37:20.789787 34361 generic.go:334] "Generic (PLEG): container finished" podID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" containerID="f98e6d86d52c9e26477f3eaacf651db4b9ae2a6be8a9a3959935ba8da1491173" exitCode=0
Feb 24 05:37:20.792434 master-0 kubenswrapper[34361]: I0224 05:37:20.792393 34361 generic.go:334] "Generic (PLEG): container finished" podID="22813c83-2f60-44ad-9624-ad367cec08f7" containerID="c0559153cb9d3232da1d9baca34a653eff61d748f8d7e4af8a7f1e0e1d63e86d" exitCode=0
Feb 24 05:37:20.796118 master-0 kubenswrapper[34361]: I0224 05:37:20.795938 34361 generic.go:334] "Generic (PLEG): container finished" podID="2c6bb439-ed17-4761-b193-580be5f6aa00" containerID="53c7e3fc41d9bab35b02eeb11ff0277359d3318a819e1c141438a6ded2b7e362" exitCode=0
Feb 24 05:37:20.796118 master-0 kubenswrapper[34361]: I0224 05:37:20.796000 34361 generic.go:334] "Generic (PLEG): container finished" podID="2c6bb439-ed17-4761-b193-580be5f6aa00" containerID="1e0a6a04590c29af11ea3e9db28d3f49a4348c84904bd3e2b3e794e87f147724" exitCode=0
Feb 24 05:37:20.799556 master-0 kubenswrapper[34361]: I0224 05:37:20.799378 34361 generic.go:334] "Generic (PLEG): container finished" podID="75b4304c-09f2-499e-8c2f-da603e43ba72" containerID="6d21fdb0da7b4e08eb7332d4f6f4cc9f79390ab0d373543815483f10f2185255" exitCode=0
Feb 24 05:37:20.799556 master-0 kubenswrapper[34361]: I0224 05:37:20.799416 34361 generic.go:334] "Generic (PLEG): container finished" podID="75b4304c-09f2-499e-8c2f-da603e43ba72" containerID="b33243ea493b8d799596bfb5b13489bdfd7fcd9e03b18f82f7534ca74a24e7e7" exitCode=0
Feb 24 05:37:20.802531 master-0 kubenswrapper[34361]: I0224 05:37:20.802475 34361 generic.go:334] "Generic (PLEG): container finished" podID="812552f3-09b1-43f8-b910-c78e776127f8" containerID="b9d581ca9c4c50dcca1980b09409483a53c5ca25eba6a7a71de1be1dc2987a3e" exitCode=0
Feb 24 05:37:20.805086 master-0 kubenswrapper[34361]: I0224 05:37:20.805032 34361 generic.go:334] "Generic (PLEG): container finished" podID="88b915ff-fd94-4998-aa09-70f95c0f1b8a" containerID="96a4e787b3e1f9eeaea51f2ad42e9605d98e2f89f59460135daea10bdd951213" exitCode=0
Feb 24 05:37:20.814876 master-0 kubenswrapper[34361]: I0224 05:37:20.813589 34361 generic.go:334] "Generic (PLEG): container finished" podID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerID="4f3ae3a1fb93152f16413963009dac29f899944719e22e0315c1d5fd940eb4a6" exitCode=0
Feb 24 05:37:20.815870 master-0 kubenswrapper[34361]: I0224 05:37:20.815801 34361 generic.go:334] "Generic (PLEG): container finished" podID="dd29bef3-d27e-48b3-9aa0-d915e949b3d5" containerID="c2d1c04894486e075c5bb15ad6bb88a45eb446ca42f9495fa6638b84c3d79262" exitCode=0
Feb 24 05:37:20.825725 master-0 kubenswrapper[34361]: I0224 05:37:20.825681 34361 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="db9f1ce1d0787cc02e6669cdb33b3c44fb0d9c881cd88a981199272e23c784a9" exitCode=0
Feb 24 05:37:20.825725 master-0 kubenswrapper[34361]: I0224 05:37:20.825712 34361 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="10e04cc7b2fe6f5614f2167cd49733daceb69f740134e7a457b65b54dad51b16" exitCode=0
Feb 24 05:37:20.825725 master-0 kubenswrapper[34361]: I0224 05:37:20.825723 34361 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="5ade2b4cc50015238a7faa7e8d4af8c535b8fa2c1005c60f4da3c1f127ccbe16" exitCode=0
Feb 24 05:37:20.825725 master-0 kubenswrapper[34361]: I0224 05:37:20.825731 34361 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="08804aa446128a3eba2bae15a34a0cc35ebced6e192e0098ad42bbf36874d56b" exitCode=0
Feb 24 05:37:20.825725 master-0 kubenswrapper[34361]: I0224 05:37:20.825739 34361 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="1273096ef4d43d16e5ea21290ec73d25330bc531d5f7358ac2c2166cc791f502" exitCode=0
Feb 24 05:37:20.825725 master-0 kubenswrapper[34361]: I0224 05:37:20.825748 34361 generic.go:334] "Generic (PLEG): container finished" podID="767424fb-babf-4b73-b5e2-0bee65fcf207" containerID="2b0f6afa851de70b995ddec42c066893d0946d31fc515e6b27f74dd91d84efa9" exitCode=0
Feb 24 05:37:20.828612 master-0 kubenswrapper[34361]: I0224 05:37:20.828521 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/6.log"
Feb 24 05:37:20.829230 master-0 kubenswrapper[34361]: I0224 05:37:20.829150 34361 generic.go:334] "Generic (PLEG):
container finished" podID="3d6b1ce7-1213-494c-829d-186d39eac7eb" containerID="e5961da58ba0000499976ed125663a28df9508f26428d259f2513e76bb11ef6f" exitCode=1 Feb 24 05:37:20.835952 master-0 kubenswrapper[34361]: I0224 05:37:20.835867 34361 generic.go:334] "Generic (PLEG): container finished" podID="8f3825c1-975c-40b5-a6ad-0f200968b3cd" containerID="ffc314400db214f427906ec4ca12f75c59303e7a375e1e0d03ee1ca927488079" exitCode=0 Feb 24 05:37:20.835952 master-0 kubenswrapper[34361]: I0224 05:37:20.835924 34361 generic.go:334] "Generic (PLEG): container finished" podID="8f3825c1-975c-40b5-a6ad-0f200968b3cd" containerID="99fcb3aa839cddf10ee1220b0b8dba6f4ce8ca2800ef080d6330776f6b0863c7" exitCode=0 Feb 24 05:37:20.839661 master-0 kubenswrapper[34361]: I0224 05:37:20.839600 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-6f47d587d6-7b87v_3f511d03-a182-4968-ba40-5c5c10e5e6be/openshift-config-operator/2.log" Feb 24 05:37:20.840386 master-0 kubenswrapper[34361]: I0224 05:37:20.840297 34361 generic.go:334] "Generic (PLEG): container finished" podID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerID="dd63c41ce4fcbe7d75c9e6d476fdf1e340af8b590914ef213ef34b53284ccafe" exitCode=255 Feb 24 05:37:20.840386 master-0 kubenswrapper[34361]: I0224 05:37:20.840367 34361 generic.go:334] "Generic (PLEG): container finished" podID="3f511d03-a182-4968-ba40-5c5c10e5e6be" containerID="9d5d2fd92f71a6c0810699352fbe58ce30a0fa6af46df79a0db731109cbec1eb" exitCode=0 Feb 24 05:37:20.847914 master-0 kubenswrapper[34361]: I0224 05:37:20.847838 34361 generic.go:334] "Generic (PLEG): container finished" podID="f77227c8-c52d-4a71-ae1b-792055f6f23d" containerID="77344984c3a22910313574fd5443c3f8c0826a85a9d2f12dd8592b5e925a1b84" exitCode=0 Feb 24 05:37:20.851222 master-0 kubenswrapper[34361]: I0224 05:37:20.851150 34361 generic.go:334] "Generic (PLEG): container finished" podID="1e7f7c02-4c84-432a-8d59-25dd3bfef5c2" 
containerID="efa90e77631439dbef62b24eb0a109dbbb0250a2d2b24124da5e8a8cbc7dcbd0" exitCode=0 Feb 24 05:37:20.854545 master-0 kubenswrapper[34361]: I0224 05:37:20.854408 34361 generic.go:334] "Generic (PLEG): container finished" podID="cd674e58-b749-46fb-8a28-66012fd8b401" containerID="124c812cfefad15d4947b33d7dd6cb8f0bef4d7acc6ad12461d90e6b781bfc01" exitCode=0 Feb 24 05:37:20.854545 master-0 kubenswrapper[34361]: I0224 05:37:20.854461 34361 generic.go:334] "Generic (PLEG): container finished" podID="cd674e58-b749-46fb-8a28-66012fd8b401" containerID="8b37d0025618263e47dfe8f40022b28e5392017192dbe6c7bc145156cde44d71" exitCode=0 Feb 24 05:37:20.858375 master-0 kubenswrapper[34361]: I0224 05:37:20.858255 34361 generic.go:334] "Generic (PLEG): container finished" podID="416b60c941b7224bbf94e8f78b59b910" containerID="f8b39be67a04cf9d38216643f5aaffec2fb3ec2bf8622811dc4fae7f64bc4612" exitCode=0 Feb 24 05:37:20.861041 master-0 kubenswrapper[34361]: I0224 05:37:20.860972 34361 generic.go:334] "Generic (PLEG): container finished" podID="d86d5bbe-3768-4695-810b-245a56e4fd1d" containerID="2f151e3442498eed531dc228511816d55db9ae5db685cbb2166ce65b5b71997d" exitCode=0 Feb 24 05:37:20.863926 master-0 kubenswrapper[34361]: I0224 05:37:20.863817 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_c997c8e9d3be51d454d8e61e376bef08/kube-rbac-proxy-crio/2.log" Feb 24 05:37:20.864422 master-0 kubenswrapper[34361]: I0224 05:37:20.864357 34361 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c" exitCode=1 Feb 24 05:37:20.864422 master-0 kubenswrapper[34361]: I0224 05:37:20.864408 34361 generic.go:334] "Generic (PLEG): container finished" podID="c997c8e9d3be51d454d8e61e376bef08" containerID="23d5e42153d1239bec04afab6c545620b9ef683ee911bb6159c7f6877a1bbf3e" exitCode=0 Feb 24 05:37:20.867113 master-0 
kubenswrapper[34361]: I0224 05:37:20.867049 34361 generic.go:334] "Generic (PLEG): container finished" podID="8a278410-3079-49d9-8c59-4cedf3f50213" containerID="e982480a91e40cd1e1954911193f2f93b612563b4c71eb1b41d290507d50a572" exitCode=0 Feb 24 05:37:20.871496 master-0 kubenswrapper[34361]: I0224 05:37:20.871435 34361 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8" exitCode=0 Feb 24 05:37:20.873694 master-0 kubenswrapper[34361]: I0224 05:37:20.873633 34361 generic.go:334] "Generic (PLEG): container finished" podID="7a2c651d-ea1a-41f2-9745-04adc8d88904" containerID="5b9fbeb4c761c7177b525ed4d8c68cf8e069fca30c46bcfac1010c8ec65d4d07" exitCode=0 Feb 24 05:37:20.881427 master-0 kubenswrapper[34361]: I0224 05:37:20.881355 34361 generic.go:334] "Generic (PLEG): container finished" podID="e1f03d97-1a6a-41e4-9ed3-cd9b01c46400" containerID="a36fb847cfc8df5fc6c5185376329dd9ae5ab47df139ba0d792b1adb2ce6277f" exitCode=0 Feb 24 05:37:20.884137 master-0 kubenswrapper[34361]: I0224 05:37:20.883872 34361 generic.go:334] "Generic (PLEG): container finished" podID="17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" containerID="f1b7c6a181b3c4b7c381db07bd0f31166802251328ff7e67a24e7a9f4676e269" exitCode=0 Feb 24 05:37:20.895171 master-0 kubenswrapper[34361]: I0224 05:37:20.895106 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-84b8d9d697-zvzxs_d9492fbf-d0f4-4ecf-84ba-b089d69535c1/manager/1.log" Feb 24 05:37:20.895660 master-0 kubenswrapper[34361]: I0224 05:37:20.895614 34361 generic.go:334] "Generic (PLEG): container finished" podID="d9492fbf-d0f4-4ecf-84ba-b089d69535c1" containerID="54cc6a7eea7de4886fcefce8b98bd35f27338eed7eb5d39d1aa4df2fed85d25a" exitCode=1 Feb 24 05:37:20.896605 master-0 kubenswrapper[34361]: E0224 05:37:20.896440 34361 kubelet.go:2359] "Skipping pod synchronization" err="container runtime 
status check may not have completed yet" Feb 24 05:37:20.899006 master-0 kubenswrapper[34361]: I0224 05:37:20.898956 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-5c75f78c8b-9d82f_49bfccec-61ec-4bef-a561-9f6e6f906215/package-server-manager/0.log" Feb 24 05:37:20.899564 master-0 kubenswrapper[34361]: I0224 05:37:20.899498 34361 generic.go:334] "Generic (PLEG): container finished" podID="49bfccec-61ec-4bef-a561-9f6e6f906215" containerID="44c8e9a1ff88f591315795d60d58a57e8877a5eadcf63c1d03aab3f292d278d7" exitCode=1 Feb 24 05:37:20.913113 master-0 kubenswrapper[34361]: I0224 05:37:20.913057 34361 generic.go:334] "Generic (PLEG): container finished" podID="2d3d57f1-cd67-4f1d-b267-f652b9bb3448" containerID="9b98ab8d2dc17a91ddedb320e3bb1181b379c4590b7ec6f960ba108eb0e71383" exitCode=0 Feb 24 05:37:20.915546 master-0 kubenswrapper[34361]: I0224 05:37:20.915508 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5c7cf458b4-65mc5_116e6b47-d435-49ca-abb5-088788daf16a/machine-api-operator/0.log" Feb 24 05:37:20.916126 master-0 kubenswrapper[34361]: I0224 05:37:20.916081 34361 generic.go:334] "Generic (PLEG): container finished" podID="116e6b47-d435-49ca-abb5-088788daf16a" containerID="6b3c3ebf05dd2e018df6f39f4bdd076d24f312bc4472c6ee016795dfeeb9269e" exitCode=255 Feb 24 05:37:20.918869 master-0 kubenswrapper[34361]: I0224 05:37:20.918812 34361 generic.go:334] "Generic (PLEG): container finished" podID="f2be5ed6-fdf0-4462-a319-eed1a5a1c778" containerID="9efe8f0118c66739205c89b7031607a78cfad712b2abd1398e2a5aea5ff44c44" exitCode=0 Feb 24 05:37:20.928483 master-0 kubenswrapper[34361]: I0224 05:37:20.928283 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5bd7c86784-kbb8z_59333a14-5bdc-4590-a3da-af7300f086da/authentication-operator/4.log" Feb 24 05:37:20.928483 master-0 
kubenswrapper[34361]: I0224 05:37:20.928387 34361 generic.go:334] "Generic (PLEG): container finished" podID="59333a14-5bdc-4590-a3da-af7300f086da" containerID="fb14b25796af448c2bc49088ecec2ac65559ebf13053ffc78319aa0e2b8844d9" exitCode=255 Feb 24 05:37:20.934907 master-0 kubenswrapper[34361]: I0224 05:37:20.934851 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-rlg4x_c106275b-72b6-4877-95c3-830f93e35375/approver/1.log" Feb 24 05:37:20.935371 master-0 kubenswrapper[34361]: I0224 05:37:20.935254 34361 generic.go:334] "Generic (PLEG): container finished" podID="c106275b-72b6-4877-95c3-830f93e35375" containerID="8d89f8110c46f839405874fb4dba9bf410e3a518ca5d273b143187f669975cd0" exitCode=1 Feb 24 05:37:20.939869 master-0 kubenswrapper[34361]: I0224 05:37:20.939773 34361 generic.go:334] "Generic (PLEG): container finished" podID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerID="d54fd19b9eb4386cf27b0171bbd26afecfaf6c5721e1c1b2aba9af1126e48295" exitCode=0 Feb 24 05:37:21.151166 master-0 kubenswrapper[34361]: I0224 05:37:21.150978 34361 manager.go:324] Recovery completed Feb 24 05:37:21.257134 master-0 kubenswrapper[34361]: I0224 05:37:21.255978 34361 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 05:37:21.257134 master-0 kubenswrapper[34361]: I0224 05:37:21.256028 34361 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 24 05:37:21.257134 master-0 kubenswrapper[34361]: I0224 05:37:21.256170 34361 state_mem.go:36] "Initialized new in-memory state store" Feb 24 05:37:21.257134 master-0 kubenswrapper[34361]: I0224 05:37:21.256617 34361 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 24 05:37:21.257134 master-0 kubenswrapper[34361]: I0224 05:37:21.256635 34361 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 24 05:37:21.257134 master-0 kubenswrapper[34361]: I0224 05:37:21.256680 34361 state_checkpoint.go:136] "State checkpoint: restored state from 
checkpoint" Feb 24 05:37:21.257134 master-0 kubenswrapper[34361]: I0224 05:37:21.256691 34361 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 24 05:37:21.257134 master-0 kubenswrapper[34361]: I0224 05:37:21.256701 34361 policy_none.go:49] "None policy: Start" Feb 24 05:37:21.263875 master-0 kubenswrapper[34361]: I0224 05:37:21.263820 34361 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 24 05:37:21.263964 master-0 kubenswrapper[34361]: I0224 05:37:21.263896 34361 state_mem.go:35] "Initializing new in-memory state store" Feb 24 05:37:21.264451 master-0 kubenswrapper[34361]: I0224 05:37:21.264419 34361 state_mem.go:75] "Updated machine memory state" Feb 24 05:37:21.264451 master-0 kubenswrapper[34361]: I0224 05:37:21.264448 34361 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 24 05:37:21.295109 master-0 kubenswrapper[34361]: I0224 05:37:21.295037 34361 manager.go:334] "Starting Device Plugin manager" Feb 24 05:37:21.295338 master-0 kubenswrapper[34361]: I0224 05:37:21.295154 34361 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 05:37:21.295338 master-0 kubenswrapper[34361]: I0224 05:37:21.295178 34361 server.go:79] "Starting device plugin registration server" Feb 24 05:37:21.296147 master-0 kubenswrapper[34361]: I0224 05:37:21.295939 34361 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 05:37:21.296147 master-0 kubenswrapper[34361]: I0224 05:37:21.295970 34361 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 05:37:21.296352 master-0 kubenswrapper[34361]: I0224 05:37:21.296237 34361 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 05:37:21.296615 master-0 kubenswrapper[34361]: I0224 05:37:21.296596 34361 plugin_manager.go:116] "The desired_state_of_world populator (plugin 
watcher) starts" Feb 24 05:37:21.296615 master-0 kubenswrapper[34361]: I0224 05:37:21.296611 34361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 05:37:21.296709 master-0 kubenswrapper[34361]: I0224 05:37:21.296593 34361 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0"] Feb 24 05:37:21.298219 master-0 kubenswrapper[34361]: I0224 05:37:21.298141 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49dc4d8de02054e0c7305ee0abb7f18a0ace00c3ecc8e971017afe0705de270d" Feb 24 05:37:21.298443 master-0 kubenswrapper[34361]: I0224 05:37:21.298299 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"fdbebfeba39a731ff604c815c6df5321e69f6b2fb32e9fc408276330fc71c740"} Feb 24 05:37:21.298527 master-0 kubenswrapper[34361]: I0224 05:37:21.298456 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"a2e40245bac675d1008091343bd8e0a984311d8d60109e460ea7d49e335d061a"} Feb 24 05:37:21.298527 master-0 kubenswrapper[34361]: I0224 05:37:21.298488 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"3faa482b60d54621bea5a4ad6da8d12fd13e54888c7a5e9ca7eac409b6e3607e"} Feb 24 05:37:21.298527 master-0 kubenswrapper[34361]: I0224 05:37:21.298513 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"3b8e272471b366b9bb172b6754ab88ba7b2f94edde98e730bec762fb2e90114b"} Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298535 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"d0ab31f6f0d346b7ad6a527bcfc361448429c220e4ee35962995980c2b8c2920"} Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298561 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"91e0b255f1211698af466c04efc39f34de18fa6be54be7cd67ac60b0d5f244e7"} Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298587 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"bafd9772766031fe924e6722dc991fce3b4b72af5430d21cc3d769595f49edeb"} Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298614 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerDied","Data":"d72f9375dea0ad0635b80a9933bdb84b391c0ae97efa1ec6ec782f2d615cceb4"} Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298638 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"b419b8533666d3ae7054c771ce97a95f","Type":"ContainerStarted","Data":"2e6f428788cdb3f513e95cc63ecf43bbf7b7de35faa154cc080dbc5634ce8151"} Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298676 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de4f0bf021dd4e6a7368be09b5e12113f2c9fbed68c5c931e616a804a48f74b" Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: 
I0224 05:37:21.298702 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e22a12aff8d5b6b6bf25f421a38e1ab75e1b3a0b022c9941c1b0c879a1106e" Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298763 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="379b0200953b199da1fee7353da8664ed763cba78b2a8cda5a307db9466ab184" Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298790 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43e86c2da24898ed3ceda5fba223181eeaf5fa1fa61d7f1b9a1561a31040dae" Feb 24 05:37:21.298807 master-0 kubenswrapper[34361]: I0224 05:37:21.298813 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33a4bcbe5ee93a7507e3b17c9d65e1fc83f9e2c984de2f2f9d7e2c4fd84b6d8a" Feb 24 05:37:21.299282 master-0 kubenswrapper[34361]: I0224 05:37:21.298879 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="769734d30190536a2d572317485788006caf1f452e2bf4039cbb5f5e275cd997" Feb 24 05:37:21.299282 master-0 kubenswrapper[34361]: I0224 05:37:21.298916 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835ae03e3e8588604d9220c7c10316442703346b5052f347621a9b0860a0156c" Feb 24 05:37:21.299282 master-0 kubenswrapper[34361]: I0224 05:37:21.298936 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350"} Feb 24 05:37:21.299282 master-0 kubenswrapper[34361]: I0224 05:37:21.298964 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"5c4f5d60772fa42f26e9c219bffa62b9","Type":"ContainerStarted","Data":"4097b46c5415e7a8b1651e87123bd125c21ee99b1c3af149041760e25e6378ee"} Feb 24 05:37:21.299282 master-0 kubenswrapper[34361]: I0224 05:37:21.299043 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e62e33bc2b32fa546c8b71cdec9803c18e73e881c996067ed355eb35c01427f7" Feb 24 05:37:21.299282 master-0 kubenswrapper[34361]: I0224 05:37:21.299081 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47ea3d92ee18dd9e6cbbd5b8e7f44f8b09235cb5c1fd91ba759f995d35faf1f2" Feb 24 05:37:21.303492 master-0 kubenswrapper[34361]: I0224 05:37:21.303419 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b372465c7e56b5169454db98ec70891520a7992edc8d9521f0da0806e2998e04" Feb 24 05:37:21.303605 master-0 kubenswrapper[34361]: I0224 05:37:21.303490 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"5d14453ddb467f5c28b4c89fef9f05456c5bc2ab851e4cdb483a72f52c45f0ea"} Feb 24 05:37:21.303605 master-0 kubenswrapper[34361]: I0224 05:37:21.303524 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"7df6d68e4eccd870d7979d194dd996cd069e699306fa6a1039debffe4bc0d5b8"} Feb 24 05:37:21.303605 master-0 kubenswrapper[34361]: I0224 05:37:21.303541 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"920bf35ac9d63c2c6150dee7e01c82a4f11232a87154bda4b9a5efa5e5177bc2"} Feb 24 05:37:21.303605 master-0 kubenswrapper[34361]: I0224 05:37:21.303555 34361 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerDied","Data":"f8b39be67a04cf9d38216643f5aaffec2fb3ec2bf8622811dc4fae7f64bc4612"} Feb 24 05:37:21.303605 master-0 kubenswrapper[34361]: I0224 05:37:21.303571 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"416b60c941b7224bbf94e8f78b59b910","Type":"ContainerStarted","Data":"dd7b027ed4dfa318c6f765780e7da4b378d4a45eec9c4d60403e7f1cb887d422"} Feb 24 05:37:21.303605 master-0 kubenswrapper[34361]: I0224 05:37:21.303594 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"c041af7c63d223942ce08c38d39df788b42cf76c6700a1fcbc754b1fc0059d6c"} Feb 24 05:37:21.303605 master-0 kubenswrapper[34361]: I0224 05:37:21.303609 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"3e6942d2ca28138c7420b132dcdbb1b9a811151a995bdac20311a616719b966c"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303630 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerDied","Data":"23d5e42153d1239bec04afab6c545620b9ef683ee911bb6159c7f6877a1bbf3e"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303645 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"c997c8e9d3be51d454d8e61e376bef08","Type":"ContainerStarted","Data":"af62c50cd75ed27beeb63e0f7014692299e172af746bf8738716ac3ff47c9622"} Feb 24 05:37:21.304002 master-0 
kubenswrapper[34361]: I0224 05:37:21.303664 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b2e64cf1008b56ca7ac547f9f48c6ff5064b81e3d54d12e96dc4d8b69f818b" Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303677 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303692 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303705 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303720 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303733 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303780 34361 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerDied","Data":"adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303799 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"eb342c942d3d92fd08ed7cf68fafb94c","Type":"ContainerStarted","Data":"ef21d52c34e0ff209e507b2e241489d3a22d4196f3b18bf8ced7797fda251b4a"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303821 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"7b398e544e2416957c4399885f805d9a52847bdbb755fa9e7b753808f3ff7fcb"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303834 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"5f3f429a73b99edab07440134a29330648aee1055142d0e2a471d2ca4da191ec"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303848 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"25ae168ba418dfc4c1b33e602fae0945e84f4e24a75587f39220f0946080e548"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303862 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"e0f72d95db3b526338789b8fcf2468920b15351bce1ec3d46e5d53624269cc95"} Feb 24 05:37:21.304002 master-0 
kubenswrapper[34361]: I0224 05:37:21.303878 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"b23dfe329a1134a3919827a4fef6a742a5c3a54647b515a5ae24efa737eaeba7"} Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303902 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee58f94aaa31646ed744150034f7422744de52c8ea47ed7679b57341645f987d" Feb 24 05:37:21.304002 master-0 kubenswrapper[34361]: I0224 05:37:21.303995 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345bd8023fa43822945ff7359cdfe764906fb44812bf8f7d37334c964ddefedc" Feb 24 05:37:21.324941 master-0 kubenswrapper[34361]: E0224 05:37:21.323686 34361 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:37:21.324941 master-0 kubenswrapper[34361]: E0224 05:37:21.324010 34361 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:21.325423 master-0 kubenswrapper[34361]: E0224 05:37:21.325124 34361 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 24 05:37:21.372619 master-0 kubenswrapper[34361]: I0224 05:37:21.372509 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:21.372619 master-0 
kubenswrapper[34361]: I0224 05:37:21.372572 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.372824 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.372859 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.372944 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373002 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373040 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373067 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373100 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373124 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373149 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373244 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373386 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373431 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373476 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373511 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373553 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373614 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373637 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.373955 master-0 kubenswrapper[34361]: I0224 05:37:21.373665 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.396449 master-0 kubenswrapper[34361]: I0224 05:37:21.396368 34361 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 05:37:21.399600 master-0 kubenswrapper[34361]: I0224 05:37:21.399537 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 24 05:37:21.399600 master-0 kubenswrapper[34361]: I0224 05:37:21.399589 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 24 05:37:21.399600 master-0 kubenswrapper[34361]: I0224 05:37:21.399603 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 24 05:37:21.399877 master-0 kubenswrapper[34361]: I0224 05:37:21.399802 34361 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 24 05:37:21.405854 master-0 kubenswrapper[34361]: E0224 05:37:21.405705 34361 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 24 05:37:21.474253 master-0 kubenswrapper[34361]: I0224 05:37:21.474144 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474298 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-usr-local-bin\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474370 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474427 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474462 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474503 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474509 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474554 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474539 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474597 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474610 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474478 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474636 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474650 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474670 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474680 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-data-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474709 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-log-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474708 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474736 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474753 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474765 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474788 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474797 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474822 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.474779 master-0 kubenswrapper[34361]: I0224 05:37:21.474833 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.474859 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.474897 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.474931 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-resource-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.474991 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475073 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475126 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475158 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475190 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475294 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/416b60c941b7224bbf94e8f78b59b910-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"416b60c941b7224bbf94e8f78b59b910\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475436 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475554 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-cert-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475580 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/b419b8533666d3ae7054c771ce97a95f-static-pod-dir\") pod \"etcd-master-0\" (UID: \"b419b8533666d3ae7054c771ce97a95f\") " pod="openshift-etcd/etcd-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475646 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475715 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c997c8e9d3be51d454d8e61e376bef08-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"c997c8e9d3be51d454d8e61e376bef08\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 24 05:37:21.476846 master-0 kubenswrapper[34361]: I0224 05:37:21.475762 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:21.498448 master-0 kubenswrapper[34361]: I0224 05:37:21.498368 34361 apiserver.go:52] "Watching apiserver"
Feb 24 05:37:21.532611 master-0 kubenswrapper[34361]: I0224 05:37:21.532508 34361 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 24 05:37:21.538864 master-0 kubenswrapper[34361]: I0224 05:37:21.535264 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-apiserver/installer-1-master-0","openshift-kube-apiserver/installer-2-master-0","openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t","openshift-multus/network-metrics-daemon-2vsjh","openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-scheduler/installer-1-retry-1-master-0","openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr","openshift-ingress-operator/ingress-operator-6569778c84-rr8r7","openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q","openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz","openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5","openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99","openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm","openshift-kube-apiserver/installer-3-master-0","openshift-monitoring/prometheus-operator-754bc4d665-xjddh","openshift-multus/multus-additional-cni-plugins-jknmn","openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb","openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96","openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v","openshift-ingress-canary/ingress-canary-5m82s","openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5","openshift-monitoring/metrics-server-65cdf565cd-555rj","openshift-network-operator/iptables-alerter-r2vvc","openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f","openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl","openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth","openshift-cluster-version/cluster-version-operator-57476485-7g2gq","openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv","openshift-etcd/etcd-master-0","openshift-marketplace/community-operators-68vwc","openshift-network-diagnostics/network-check-target-vp2jg","openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m","assisted-installer/assisted-installer-controller-r6zx7","openshift-cluster-node-tuning-operator/tuned-2w6mj","openshift-controller-manager/controller-manager-7657d7494-mmsz6","openshift-kube-controller-manager/installer-2-master-0","openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd","openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2","openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq","openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv","openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl","openshift-machine-config-operator/machine-config-server-xxl55","openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk","openshift-multus/multus-8qp5g","openshift-network-operator/network-operator-7d7db75979-4fk6k","openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs","openshift-kube-apiserver/kube-apiserver-master-0","openshift-marketplace/certified-operators-gn8m8","openshift-marketplace/redhat-operators-xm8sw","openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z","openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs","openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z","openshift-monitoring/kube-state-metrics-59584d565f-gsgxz","openshift-monitoring/node-exporter-qk7rz","openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2","openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7","openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs","openshift-etcd/installer-1-master-0","openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t","openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt","openshift-marketplace/marketplace-operator-6f5488b997-dbsnm","openshift-network-node-identity/network-node-identity-rlg4x","openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght","openshift-service-ca/service-ca-576b4d78bd-fsmrl","openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv","openshift-marketplace/redhat-marketplace-v64s6","openshift-machine-config-operator/machine-config-daemon-c56dz","openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md","openshift-monitoring/telemeter-client-96c995bf5-57k8x","openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj","openshift-ovn-kubernetes/ovnkube-node-vd82q","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58","openshift-etcd/installer-2-master-0","openshift-ingress/router-default-7b65dc9fcb-zxkt2","openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-apiserver/apiserver-fdc9d7cdd-8v72m","openshift-dns-operator/dns-operator-8c7d49845-4dhth","openshift-dns/dns-default-cdk2w","openshift-kube-scheduler/installer-1-master-0","openshift-kube-scheduler/installer-2-master-0","openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z","openshift-dns/node-resolver-ng8tz","openshift-insights/insights-operator-59b498fcfb-mprnx","openshift-kube-controller-manager/installer-3-master-0"]
Feb 24 05:37:21.538864 master-0 kubenswrapper[34361]: I0224 05:37:21.535766 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-r6zx7"
Feb 24 05:37:21.540008 master-0 kubenswrapper[34361]: I0224 05:37:21.539895 34361 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="18870904-bc46-4310-ab4a-d3ad9e6837a8"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.548100 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.548497 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.548909 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.549083 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.549298 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.549999 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.550033 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.550152 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.550382 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 24 05:37:21.550564 master-0 kubenswrapper[34361]: I0224 05:37:21.550508 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 24 05:37:21.553445 master-0 kubenswrapper[34361]: I0224 05:37:21.550732 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 24 05:37:21.553445 master-0 kubenswrapper[34361]: I0224 05:37:21.550970 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 24 05:37:21.553445 master-0 kubenswrapper[34361]: I0224 05:37:21.550384 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 05:37:21.553445 master-0 kubenswrapper[34361]: I0224 05:37:21.551923 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 24 05:37:21.553445 master-0 kubenswrapper[34361]: I0224 05:37:21.553047 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Feb 24 05:37:21.554743 master-0 kubenswrapper[34361]: I0224 05:37:21.554695 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 05:37:21.555124 master-0 kubenswrapper[34361]: I0224 05:37:21.555047 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 05:37:21.555906 master-0 kubenswrapper[34361]: I0224 05:37:21.555798 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 05:37:21.556464 master-0 kubenswrapper[34361]: I0224 05:37:21.556419 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 05:37:21.558953 master-0 kubenswrapper[34361]: I0224 05:37:21.558919 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 24 05:37:21.561923 master-0 kubenswrapper[34361]: I0224 05:37:21.561856 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 24 05:37:21.563387 master-0 kubenswrapper[34361]: I0224 05:37:21.562095 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 24 05:37:21.563387 master-0 kubenswrapper[34361]: I0224 05:37:21.562671 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 24 05:37:21.564717 master-0 kubenswrapper[34361]: I0224 05:37:21.564673 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 05:37:21.564969 master-0 kubenswrapper[34361]: I0224 05:37:21.564905 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 24 05:37:21.564969 master-0 kubenswrapper[34361]: I0224 05:37:21.564704 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 24 05:37:21.565357 master-0 kubenswrapper[34361]: I0224 05:37:21.564987 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 24 05:37:21.567986 master-0 kubenswrapper[34361]: I0224 05:37:21.566549 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 24 05:37:21.567986 master-0 kubenswrapper[34361]: I0224 05:37:21.567881 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-retry-1-master-0"
Feb 24 05:37:21.568589 master-0 kubenswrapper[34361]: I0224 05:37:21.568545 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 24 05:37:21.569183 master-0 kubenswrapper[34361]: I0224 05:37:21.569054 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 05:37:21.569949 master-0 kubenswrapper[34361]: I0224 05:37:21.569876 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 24 05:37:21.569949 master-0 kubenswrapper[34361]: I0224 05:37:21.569894 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 05:37:21.569949 master-0 kubenswrapper[34361]: I0224 05:37:21.569931 34361 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-network-operator"/"metrics-tls" Feb 24 05:37:21.570985 master-0 kubenswrapper[34361]: I0224 05:37:21.570818 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 05:37:21.571206 master-0 kubenswrapper[34361]: I0224 05:37:21.570892 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 05:37:21.572596 master-0 kubenswrapper[34361]: I0224 05:37:21.572517 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 05:37:21.573341 master-0 kubenswrapper[34361]: I0224 05:37:21.573254 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.573595 master-0 kubenswrapper[34361]: I0224 05:37:21.573552 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.573893 master-0 kubenswrapper[34361]: I0224 05:37:21.573848 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 05:37:21.574189 master-0 kubenswrapper[34361]: I0224 05:37:21.574145 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 05:37:21.574597 master-0 kubenswrapper[34361]: I0224 05:37:21.574532 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 05:37:21.574768 master-0 kubenswrapper[34361]: I0224 05:37:21.574726 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 05:37:21.574997 master-0 kubenswrapper[34361]: I0224 05:37:21.574962 34361 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 05:37:21.576604 master-0 kubenswrapper[34361]: I0224 05:37:21.576550 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 05:37:21.576771 master-0 kubenswrapper[34361]: I0224 05:37:21.576616 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 24 05:37:21.578623 master-0 kubenswrapper[34361]: I0224 05:37:21.578578 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 05:37:21.578829 master-0 kubenswrapper[34361]: I0224 05:37:21.578794 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 05:37:21.579023 master-0 kubenswrapper[34361]: I0224 05:37:21.578982 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 05:37:21.579378 master-0 kubenswrapper[34361]: I0224 05:37:21.579147 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.579378 master-0 kubenswrapper[34361]: I0224 05:37:21.579193 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.579924 master-0 kubenswrapper[34361]: I0224 05:37:21.579863 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 05:37:21.580941 master-0 kubenswrapper[34361]: I0224 05:37:21.580837 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 05:37:21.581138 master-0 kubenswrapper[34361]: I0224 05:37:21.581117 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 05:37:21.581254 master-0 
kubenswrapper[34361]: I0224 05:37:21.581208 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 05:37:21.581375 master-0 kubenswrapper[34361]: I0224 05:37:21.581267 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 05:37:21.581706 master-0 kubenswrapper[34361]: I0224 05:37:21.581671 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 05:37:21.582657 master-0 kubenswrapper[34361]: I0224 05:37:21.582578 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 05:37:21.583251 master-0 kubenswrapper[34361]: I0224 05:37:21.583159 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 05:37:21.584419 master-0 kubenswrapper[34361]: I0224 05:37:21.584284 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.584419 master-0 kubenswrapper[34361]: I0224 05:37:21.584404 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 05:37:21.584666 master-0 kubenswrapper[34361]: I0224 05:37:21.584609 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 24 05:37:21.584991 master-0 kubenswrapper[34361]: I0224 05:37:21.584926 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 05:37:21.585885 master-0 kubenswrapper[34361]: I0224 05:37:21.585835 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 05:37:21.585885 master-0 kubenswrapper[34361]: I0224 
05:37:21.585867 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 05:37:21.586078 master-0 kubenswrapper[34361]: I0224 05:37:21.585962 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 05:37:21.586078 master-0 kubenswrapper[34361]: I0224 05:37:21.586016 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 05:37:21.592044 master-0 kubenswrapper[34361]: I0224 05:37:21.591961 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 05:37:21.592263 master-0 kubenswrapper[34361]: I0224 05:37:21.592103 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 24 05:37:21.592689 master-0 kubenswrapper[34361]: I0224 05:37:21.592635 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 05:37:21.595110 master-0 kubenswrapper[34361]: I0224 05:37:21.594032 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.595817 master-0 kubenswrapper[34361]: I0224 05:37:21.594058 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 05:37:21.595977 master-0 kubenswrapper[34361]: I0224 05:37:21.595828 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 24 05:37:21.595977 master-0 kubenswrapper[34361]: I0224 05:37:21.595777 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 24 05:37:21.596193 master-0 kubenswrapper[34361]: I0224 05:37:21.594139 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 05:37:21.596193 master-0 kubenswrapper[34361]: I0224 05:37:21.595336 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 05:37:21.596485 master-0 kubenswrapper[34361]: I0224 05:37:21.596283 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.597175 master-0 kubenswrapper[34361]: I0224 05:37:21.597129 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 05:37:21.597175 master-0 kubenswrapper[34361]: I0224 05:37:21.597139 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.597476 master-0 kubenswrapper[34361]: I0224 05:37:21.597247 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 05:37:21.597476 master-0 kubenswrapper[34361]: I0224 05:37:21.597456 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 24 05:37:21.597618 master-0 kubenswrapper[34361]: I0224 05:37:21.597159 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 05:37:21.598370 master-0 kubenswrapper[34361]: I0224 05:37:21.598285 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 05:37:21.598494 master-0 kubenswrapper[34361]: I0224 05:37:21.598396 34361 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 05:37:21.598780 master-0 kubenswrapper[34361]: I0224 05:37:21.598688 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 24 05:37:21.600572 master-0 kubenswrapper[34361]: I0224 05:37:21.600529 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 24 05:37:21.600852 master-0 kubenswrapper[34361]: I0224 05:37:21.600796 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 05:37:21.601124 master-0 kubenswrapper[34361]: I0224 05:37:21.601057 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 05:37:21.601341 master-0 kubenswrapper[34361]: I0224 05:37:21.601270 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 24 05:37:21.601561 master-0 kubenswrapper[34361]: I0224 05:37:21.601458 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb" Feb 24 05:37:21.601648 master-0 kubenswrapper[34361]: I0224 05:37:21.601589 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 24 05:37:21.602093 master-0 kubenswrapper[34361]: I0224 05:37:21.602042 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.602634 master-0 kubenswrapper[34361]: I0224 05:37:21.602591 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 24 05:37:21.603017 master-0 kubenswrapper[34361]: I0224 05:37:21.602979 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 05:37:21.603119 master-0 kubenswrapper[34361]: I0224 05:37:21.603067 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 24 05:37:21.603227 master-0 kubenswrapper[34361]: I0224 05:37:21.603130 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 24 05:37:21.603227 master-0 kubenswrapper[34361]: I0224 05:37:21.603215 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 05:37:21.603570 master-0 kubenswrapper[34361]: I0224 05:37:21.603462 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 05:37:21.604955 master-0 kubenswrapper[34361]: I0224 05:37:21.604909 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 24 05:37:21.605997 master-0 kubenswrapper[34361]: I0224 05:37:21.605938 34361 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:37:21.607486 master-0 kubenswrapper[34361]: I0224 05:37:21.607413 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 05:37:21.607980 master-0 kubenswrapper[34361]: I0224 05:37:21.607934 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 05:37:21.609226 master-0 kubenswrapper[34361]: I0224 05:37:21.609194 34361 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 05:37:21.609983 master-0 kubenswrapper[34361]: I0224 05:37:21.609935 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 24 05:37:21.612209 master-0 kubenswrapper[34361]: I0224 05:37:21.612172 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:37:21.612209 master-0 kubenswrapper[34361]: I0224 05:37:21.612207 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:37:21.612379 master-0 kubenswrapper[34361]: I0224 05:37:21.612218 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:37:21.612422 master-0 kubenswrapper[34361]: I0224 05:37:21.612409 34361 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 05:37:21.618545 master-0 kubenswrapper[34361]: I0224 05:37:21.618499 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 24 05:37:21.618913 master-0 kubenswrapper[34361]: I0224 05:37:21.618885 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 05:37:21.619041 master-0 kubenswrapper[34361]: I0224 05:37:21.619005 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 05:37:21.619104 master-0 kubenswrapper[34361]: I0224 05:37:21.619066 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 05:37:21.619179 master-0 kubenswrapper[34361]: I0224 05:37:21.618919 34361 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Feb 24 05:37:21.619257 master-0 kubenswrapper[34361]: I0224 05:37:21.619213 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 05:37:21.620590 master-0 kubenswrapper[34361]: I0224 05:37:21.620550 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 05:37:21.623998 master-0 kubenswrapper[34361]: I0224 05:37:21.623978 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 05:37:21.630022 master-0 kubenswrapper[34361]: I0224 05:37:21.629840 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 05:37:21.648510 master-0 kubenswrapper[34361]: I0224 05:37:21.638981 34361 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 24 05:37:21.665475 master-0 kubenswrapper[34361]: I0224 05:37:21.663874 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 24 05:37:21.665475 master-0 kubenswrapper[34361]: I0224 05:37:21.664064 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 05:37:21.665475 master-0 kubenswrapper[34361]: I0224 05:37:21.664367 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 05:37:21.665475 master-0 kubenswrapper[34361]: I0224 05:37:21.664614 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 05:37:21.665475 master-0 kubenswrapper[34361]: I0224 05:37:21.665367 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" 
Feb 24 05:37:21.677908 master-0 kubenswrapper[34361]: I0224 05:37:21.677851 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgl5l\" (UniqueName: \"kubernetes.io/projected/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-api-access-hgl5l\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:37:21.678016 master-0 kubenswrapper[34361]: I0224 05:37:21.677953 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc"
Feb 24 05:37:21.678150 master-0 kubenswrapper[34361]: I0224 05:37:21.678070 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6"
Feb 24 05:37:21.678197 master-0 kubenswrapper[34361]: I0224 05:37:21.678163 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lkf2\" (UniqueName: \"kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6"
Feb 24 05:37:21.678472 master-0 kubenswrapper[34361]: I0224 05:37:21.678408 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cczbm\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-kube-api-access-cczbm\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"
Feb 24 05:37:21.678632 master-0 kubenswrapper[34361]: I0224 05:37:21.678561 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 24 05:37:21.678691 master-0 kubenswrapper[34361]: I0224 05:37:21.678565 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t"
Feb 24 05:37:21.678880 master-0 kubenswrapper[34361]: I0224 05:37:21.678819 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:37:21.678926 master-0 kubenswrapper[34361]: I0224 05:37:21.678898 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:37:21.679806 master-0 kubenswrapper[34361]: I0224 05:37:21.679757 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vdmz\" (UniqueName: \"kubernetes.io/projected/3f511d03-a182-4968-ba40-5c5c10e5e6be-kube-api-access-4vdmz\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"
Feb 24 05:37:21.679806 master-0 kubenswrapper[34361]: I0224 05:37:21.679793 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679814 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679838 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679792 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-client\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679858 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-srv-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679886 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679917 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4rw\" (UniqueName: \"kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679941 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679958 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/49b426a3-f16e-40e9-a166-7270d4cfcc60-tmpfs\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs"
Feb 24 05:37:21.680021 master-0 kubenswrapper[34361]: I0224 05:37:21.679976 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-serving-cert\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:37:21.680434 master-0 kubenswrapper[34361]: I0224 05:37:21.680273 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv"
Feb 24 05:37:21.680434 master-0 kubenswrapper[34361]: I0224 05:37:21.680354 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovnkube-config\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght"
Feb 24 05:37:21.680547 master-0 kubenswrapper[34361]: I0224 05:37:21.680401 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"
Feb 24 05:37:21.680547 master-0 kubenswrapper[34361]: I0224 05:37:21.680534 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:37:21.680696 master-0 kubenswrapper[34361]: I0224 05:37:21.680583 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k"
Feb 24 05:37:21.680696 master-0 kubenswrapper[34361]: I0224 05:37:21.680663 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/812552f3-09b1-43f8-b910-c78e776127f8-audit-dir\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:37:21.680887 master-0 kubenswrapper[34361]: I0224 05:37:21.680809 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3363f001-1cfa-41f5-b245-30cc99dd09cb-metrics-tls\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w"
Feb 24 05:37:21.680887 master-0 kubenswrapper[34361]: I0224 05:37:21.680748 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-589rv\" (UniqueName: \"kubernetes.io/projected/3363f001-1cfa-41f5-b245-30cc99dd09cb-kube-api-access-589rv\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w"
Feb 24 05:37:21.681065 master-0 kubenswrapper[34361]: I0224 05:37:21.680918 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-default-certificate\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:37:21.681132 master-0 kubenswrapper[34361]: I0224 05:37:21.681077 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"
Feb 24 05:37:21.681188 master-0 kubenswrapper[34361]: I0224 05:37:21.681165 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-lib-modules\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj"
Feb 24 05:37:21.681243 master-0 kubenswrapper[34361]: I0224 05:37:21.681207 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-operator-metrics\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm"
Feb 24 05:37:21.681299 master-0 kubenswrapper[34361]: I0224 05:37:21.681267 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-service-ca\") pod \"cluster-version-operator-57476485-7g2gq\" (UID:
\"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:21.681390 master-0 kubenswrapper[34361]: I0224 05:37:21.681364 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdpfz\" (UniqueName: \"kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:37:21.681454 master-0 kubenswrapper[34361]: I0224 05:37:21.681438 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/49b426a3-f16e-40e9-a166-7270d4cfcc60-tmpfs\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:37:21.681547 master-0 kubenswrapper[34361]: I0224 05:37:21.680610 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-config\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:37:21.681547 master-0 kubenswrapper[34361]: I0224 05:37:21.681448 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-utilities\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:21.681676 master-0 kubenswrapper[34361]: I0224 05:37:21.680994 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-env-overrides\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.681878 master-0 kubenswrapper[34361]: I0224 05:37:21.681812 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-utilities\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:21.682461 master-0 kubenswrapper[34361]: I0224 05:37:21.682256 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-default-certificate\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:37:21.682552 master-0 kubenswrapper[34361]: I0224 05:37:21.682469 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8z6s\" (UniqueName: \"kubernetes.io/projected/8f3825c1-975c-40b5-a6ad-0f200968b3cd-kube-api-access-l8z6s\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:21.682552 master-0 kubenswrapper[34361]: I0224 05:37:21.682540 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xzk\" (UniqueName: \"kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:37:21.682687 master-0 kubenswrapper[34361]: I0224 05:37:21.682592 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:37:21.682761 master-0 kubenswrapper[34361]: I0224 05:37:21.682683 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:21.682761 master-0 kubenswrapper[34361]: I0224 05:37:21.682738 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.682876 master-0 kubenswrapper[34361]: I0224 05:37:21.682795 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/80cc7ad6-051b-4ee5-94af-611388d9622a-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:21.682876 master-0 kubenswrapper[34361]: I0224 05:37:21.682837 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 
05:37:21.682876 master-0 kubenswrapper[34361]: I0224 05:37:21.682864 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f77227c8-c52d-4a71-ae1b-792055f6f23d-metrics-tls\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:37:21.683100 master-0 kubenswrapper[34361]: I0224 05:37:21.682889 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" Feb 24 05:37:21.683169 master-0 kubenswrapper[34361]: I0224 05:37:21.683128 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:37:21.683169 master-0 kubenswrapper[34361]: I0224 05:37:21.683143 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/80cc7ad6-051b-4ee5-94af-611388d9622a-volume-directive-shadow\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:21.683333 master-0 kubenswrapper[34361]: I0224 05:37:21.683178 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9pp4\" (UniqueName: 
\"kubernetes.io/projected/03e4cebe-f3df-423f-be2b-7fb22bd58341-kube-api-access-f9pp4\") pod \"migrator-5c85bff57-txt9d\" (UID: \"03e4cebe-f3df-423f-be2b-7fb22bd58341\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d" Feb 24 05:37:21.683454 master-0 kubenswrapper[34361]: I0224 05:37:21.683364 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjtv8\" (UniqueName: \"kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:37:21.683543 master-0 kubenswrapper[34361]: I0224 05:37:21.683505 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a3561f49-0808-4d96-95ec-456fcb5c5bb4-rootfs\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:37:21.683636 master-0 kubenswrapper[34361]: I0224 05:37:21.683598 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:37:21.683714 master-0 kubenswrapper[34361]: I0224 05:37:21.683689 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-catalog-content\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " 
pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:21.683786 master-0 kubenswrapper[34361]: I0224 05:37:21.683733 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.683977 master-0 kubenswrapper[34361]: I0224 05:37:21.683810 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.683977 master-0 kubenswrapper[34361]: I0224 05:37:21.683882 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-host\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.683977 master-0 kubenswrapper[34361]: I0224 05:37:21.683951 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-trusted-ca\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:37:21.684459 master-0 kubenswrapper[34361]: I0224 05:37:21.683958 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl24z\" (UniqueName: 
\"kubernetes.io/projected/798dcf46-8377-46b8-8387-5261d9bbefa1-kube-api-access-jl24z\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:37:21.684599 master-0 kubenswrapper[34361]: I0224 05:37:21.684454 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6f05507-d5c1-4102-a220-1db715a496e3-config\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:37:21.684686 master-0 kubenswrapper[34361]: I0224 05:37:21.684556 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-root\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.684754 master-0 kubenswrapper[34361]: I0224 05:37:21.684683 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:37:21.684815 master-0 kubenswrapper[34361]: I0224 05:37:21.684772 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3f511d03-a182-4968-ba40-5c5c10e5e6be-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:37:21.684874 master-0 kubenswrapper[34361]: I0224 
05:37:21.684829 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-catalog-content\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:21.685050 master-0 kubenswrapper[34361]: I0224 05:37:21.685003 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3f511d03-a182-4968-ba40-5c5c10e5e6be-available-featuregates\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:37:21.685050 master-0 kubenswrapper[34361]: I0224 05:37:21.684867 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:37:21.685197 master-0 kubenswrapper[34361]: I0224 05:37:21.685061 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d86d5bbe-3768-4695-810b-245a56e4fd1d-config\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:37:21.685262 master-0 kubenswrapper[34361]: I0224 05:37:21.685214 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: 
\"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.685436 master-0 kubenswrapper[34361]: I0224 05:37:21.685390 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:37:21.685537 master-0 kubenswrapper[34361]: I0224 05:37:21.685464 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" Feb 24 05:37:21.685537 master-0 kubenswrapper[34361]: I0224 05:37:21.685505 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:21.685677 master-0 kubenswrapper[34361]: I0224 05:37:21.685555 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-run\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.685739 master-0 kubenswrapper[34361]: I0224 05:37:21.685679 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.685850 master-0 kubenswrapper[34361]: I0224 05:37:21.685808 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:37:21.685921 master-0 kubenswrapper[34361]: I0224 05:37:21.685889 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:37:21.685988 master-0 kubenswrapper[34361]: I0224 05:37:21.685943 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.685988 master-0 kubenswrapper[34361]: I0224 05:37:21.685960 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/c177f8fe-8145-4557-ae78-af121efe001c-telemetry-config\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:37:21.686105 master-0 kubenswrapper[34361]: I0224 05:37:21.686007 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:37:21.686105 master-0 kubenswrapper[34361]: I0224 05:37:21.686065 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl828\" (UniqueName: \"kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.686222 master-0 kubenswrapper[34361]: I0224 05:37:21.686112 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.686222 master-0 kubenswrapper[34361]: I0224 05:37:21.686179 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysconfig\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.686222 master-0 kubenswrapper[34361]: I0224 05:37:21.686218 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-image-import-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: 
\"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.686374 master-0 kubenswrapper[34361]: I0224 05:37:21.686253 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-marketplace-trusted-ca\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:37:21.686374 master-0 kubenswrapper[34361]: I0224 05:37:21.686341 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:37:21.686374 master-0 kubenswrapper[34361]: I0224 05:37:21.686368 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:37:21.686519 master-0 kubenswrapper[34361]: I0224 05:37:21.686393 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-utilities\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:21.686627 master-0 kubenswrapper[34361]: I0224 05:37:21.686580 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:21.686687 master-0 kubenswrapper[34361]: I0224 05:37:21.686650 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:37:21.686745 master-0 kubenswrapper[34361]: I0224 05:37:21.686697 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.686745 master-0 kubenswrapper[34361]: I0224 05:37:21.686722 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-catalogserver-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.686745 master-0 kubenswrapper[34361]: I0224 05:37:21.686738 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:37:21.686864 master-0 kubenswrapper[34361]: I0224 05:37:21.686753 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd674e58-b749-46fb-8a28-66012fd8b401-utilities\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:21.686864 master-0 kubenswrapper[34361]: I0224 05:37:21.686781 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.686994 master-0 kubenswrapper[34361]: I0224 05:37:21.686866 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgl4j\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-kube-api-access-qgl4j\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.687040 master-0 kubenswrapper[34361]: I0224 05:37:21.686984 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25dbj\" (UniqueName: \"kubernetes.io/projected/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-kube-api-access-25dbj\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:37:21.687086 master-0 kubenswrapper[34361]: I0224 05:37:21.687039 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5"
Feb 24 05:37:21.687086 master-0 kubenswrapper[34361]: I0224 05:37:21.687048 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/633d33a1-e1b1-40b0-b56a-afb0c1085d97-operand-assets\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7"
Feb 24 05:37:21.687164 master-0 kubenswrapper[34361]: I0224 05:37:21.687095 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwzq\" (UniqueName: \"kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:37:21.687378 master-0 kubenswrapper[34361]: I0224 05:37:21.687292 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvznm\" (UniqueName: \"kubernetes.io/projected/c847d0c0-cc92-4d56-9e47-b83d9a39a745-kube-api-access-qvznm\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55"
Feb 24 05:37:21.688595 master-0 kubenswrapper[34361]: I0224 05:37:21.688415 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:37:21.688683 master-0 kubenswrapper[34361]: I0224 05:37:21.688650 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"
Feb 24 05:37:21.688732 master-0 kubenswrapper[34361]: I0224 05:37:21.688712 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:37:21.688940 master-0 kubenswrapper[34361]: I0224 05:37:21.688897 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6f05507-d5c1-4102-a220-1db715a496e3-serving-cert\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv"
Feb 24 05:37:21.689140 master-0 kubenswrapper[34361]: I0224 05:37:21.688894 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9a96f0d-16b8-47ee-baf2-807d2260fa71-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-hw4m2\" (UID: \"b9a96f0d-16b8-47ee-baf2-807d2260fa71\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2"
Feb 24 05:37:21.689236 master-0 kubenswrapper[34361]: I0224 05:37:21.689197 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj"
Feb 24 05:37:21.689301 master-0 kubenswrapper[34361]: I0224 05:37:21.689272 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jflg\" (UniqueName: \"kubernetes.io/projected/75b4304c-09f2-499e-8c2f-da603e43ba72-kube-api-access-7jflg\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6"
Feb 24 05:37:21.689405 master-0 kubenswrapper[34361]: I0224 05:37:21.689368 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be7a4b9e-1e9a-4298-b804-21b683805c0e-service-ca-bundle\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:37:21.689483 master-0 kubenswrapper[34361]: I0224 05:37:21.689434 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:37:21.689601 master-0 kubenswrapper[34361]: I0224 05:37:21.689563 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcj62\" (UniqueName: \"kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k"
Feb 24 05:37:21.689919 master-0 kubenswrapper[34361]: I0224 05:37:21.689862 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t"
Feb 24 05:37:21.689972 master-0 kubenswrapper[34361]: I0224 05:37:21.689707 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-serving-cert\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58"
Feb 24 05:37:21.690077 master-0 kubenswrapper[34361]: I0224 05:37:21.689805 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be7a4b9e-1e9a-4298-b804-21b683805c0e-service-ca-bundle\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:37:21.690202 master-0 kubenswrapper[34361]: I0224 05:37:21.690156 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj"
Feb 24 05:37:21.690251 master-0 kubenswrapper[34361]: I0224 05:37:21.690223 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:37:21.690559 master-0 kubenswrapper[34361]: I0224 05:37:21.690505 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb4rw\" (UniqueName: \"kubernetes.io/projected/b79ef90c-dc66-4d5f-8943-2c3ac68796ba-kube-api-access-zb4rw\") pod \"csi-snapshot-controller-6847bb4785-vqn96\" (UID: \"b79ef90c-dc66-4d5f-8943-2c3ac68796ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96"
Feb 24 05:37:21.690696 master-0 kubenswrapper[34361]: I0224 05:37:21.690652 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:37:21.691199 master-0 kubenswrapper[34361]: I0224 05:37:21.691148 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-binary-copy\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn"
Feb 24 05:37:21.691454 master-0 kubenswrapper[34361]: I0224 05:37:21.691267 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-conf\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj"
Feb 24 05:37:21.691454 master-0 kubenswrapper[34361]: I0224 05:37:21.691366 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:37:21.691454 master-0 kubenswrapper[34361]: I0224 05:37:21.691422 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:37:21.691574 master-0 kubenswrapper[34361]: I0224 05:37:21.691466 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz"
Feb 24 05:37:21.691574 master-0 kubenswrapper[34361]: I0224 05:37:21.691509 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb68s\" (UniqueName: \"kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4"
Feb 24 05:37:21.691574 master-0 kubenswrapper[34361]: I0224 05:37:21.691547 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv"
Feb 24 05:37:21.691808 master-0 kubenswrapper[34361]: I0224 05:37:21.691590 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:37:21.691808 master-0 kubenswrapper[34361]: I0224 05:37:21.691707 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:37:21.691957 master-0 kubenswrapper[34361]: I0224 05:37:21.691815 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lsr\" (UniqueName: \"kubernetes.io/projected/b8d28792-2365-4e9e-b61a-46cd2ef8b632-kube-api-access-77lsr\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:37:21.691957 master-0 kubenswrapper[34361]: I0224 05:37:21.691875 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:21.691957 master-0 kubenswrapper[34361]: I0224 05:37:21.691925 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj8cq\" (UniqueName: \"kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl"
Feb 24 05:37:21.692162 master-0 kubenswrapper[34361]: I0224 05:37:21.691975 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f3cd3830-62b5-49d1-917e-bd993d685c65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t"
Feb 24 05:37:21.692162 master-0 kubenswrapper[34361]: I0224 05:37:21.692034 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-encryption-config\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:37:21.692162 master-0 kubenswrapper[34361]: I0224 05:37:21.692074 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs"
Feb 24 05:37:21.692162 master-0 kubenswrapper[34361]: I0224 05:37:21.692113 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk"
Feb 24 05:37:21.692449 master-0 kubenswrapper[34361]: I0224 05:37:21.692410 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8dtv\" (UniqueName: \"kubernetes.io/projected/b46907eb-36d6-4410-b7d8-8012b254c861-kube-api-access-k8dtv\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"
Feb 24 05:37:21.692606 master-0 kubenswrapper[34361]: I0224 05:37:21.692479 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-tmp\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj"
Feb 24 05:37:21.692728 master-0 kubenswrapper[34361]: I0224 05:37:21.692639 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-serving-cert\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq"
Feb 24 05:37:21.692728 master-0 kubenswrapper[34361]: I0224 05:37:21.692682 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:21.692728 master-0 kubenswrapper[34361]: I0224 05:37:21.692695 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-tmp\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj"
Feb 24 05:37:21.692928 master-0 kubenswrapper[34361]: I0224 05:37:21.692794 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-kubernetes\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj"
Feb 24 05:37:21.692928 master-0 kubenswrapper[34361]: I0224 05:37:21.692890 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtnxg\" (UniqueName: \"kubernetes.io/projected/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-kube-api-access-dtnxg\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m"
Feb 24 05:37:21.693151 master-0 kubenswrapper[34361]: I0224 05:37:21.693112 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:37:21.693289 master-0 kubenswrapper[34361]: I0224 05:37:21.693194 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-sys\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:37:21.693371 master-0 kubenswrapper[34361]: I0224 05:37:21.693291 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22813c83-2f60-44ad-9624-ad367cec08f7-config\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9"
Feb 24 05:37:21.693618 master-0 kubenswrapper[34361]: I0224 05:37:21.693557 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k"
Feb 24 05:37:21.693786 master-0 kubenswrapper[34361]: I0224 05:37:21.693710 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzp4b\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-kube-api-access-fzp4b\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs"
Feb 24 05:37:21.693948 master-0 kubenswrapper[34361]: I0224 05:37:21.693893 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvm29\" (UniqueName: \"kubernetes.io/projected/be7a4b9e-1e9a-4298-b804-21b683805c0e-kube-api-access-wvm29\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:37:21.694030 master-0 kubenswrapper[34361]: I0224 05:37:21.693965 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:37:21.694118 master-0 kubenswrapper[34361]: I0224 05:37:21.694026 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:21.694118 master-0 kubenswrapper[34361]: I0224 05:37:21.694097 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bwl7\" (UniqueName: \"kubernetes.io/projected/9666fc94-71e3-46af-8b45-26e3a085d076-kube-api-access-5bwl7\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq"
Feb 24 05:37:21.694244 master-0 kubenswrapper[34361]: I0224 05:37:21.694170 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs"
Feb 24 05:37:21.694361 master-0 kubenswrapper[34361]: I0224 05:37:21.694233 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:37:21.694361 master-0 kubenswrapper[34361]: I0224 05:37:21.694294 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjzn\" (UniqueName: \"kubernetes.io/projected/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-kube-api-access-7vjzn\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl"
Feb 24 05:37:21.694452 master-0 kubenswrapper[34361]: I0224 05:37:21.694403 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-encryption-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m"
Feb 24 05:37:21.694494 master-0 kubenswrapper[34361]: I0224 05:37:21.694459 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:37:21.694545 master-0 kubenswrapper[34361]: I0224 05:37:21.694512 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-catalog-content\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6"
Feb 24 05:37:21.694593 master-0 kubenswrapper[34361]: I0224 05:37:21.694554 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2c651d-ea1a-41f2-9745-04adc8d88904-serving-cert\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:37:21.694642 master-0 kubenswrapper[34361]: I0224 05:37:21.694511 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/c177f8fe-8145-4557-ae78-af121efe001c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q"
Feb 24 05:37:21.694692 master-0 kubenswrapper[34361]: I0224 05:37:21.694663 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-catalog-content\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6"
Feb 24 05:37:21.694692 master-0 kubenswrapper[34361]: I0224 05:37:21.694664 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4qf\" (UniqueName: \"kubernetes.io/projected/e6a0fc47-b446-4902-9f8a-04870cbafcab-kube-api-access-kx4qf\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw"
Feb 24 05:37:21.694780 master-0 kubenswrapper[34361]: I0224 05:37:21.694749 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-encryption-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m"
Feb 24 05:37:21.694870 master-0 kubenswrapper[34361]: I0224 05:37:21.694824 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-metrics-certs\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh"
Feb 24 05:37:21.694870 master-0 kubenswrapper[34361]: I0224 05:37:21.694800 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj"
Feb 24 05:37:21.694999 master-0 kubenswrapper[34361]: I0224 05:37:21.694966 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"
Feb 24 05:37:21.694999 master-0 kubenswrapper[34361]: I0224 05:37:21.694996 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2fkp\" (UniqueName: \"kubernetes.io/projected/39c4d0aa-c372-4d02-9302-337e68b56784-kube-api-access-b2fkp\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md"
Feb 24 05:37:21.695084 master-0 kubenswrapper[34361]: I0224 05:37:21.695024 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:21.695261 master-0 kubenswrapper[34361]: I0224 05:37:21.695213 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:37:21.695261 master-0 kubenswrapper[34361]: I0224 05:37:21.695242 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g"
Feb 24 05:37:21.695542 master-0 kubenswrapper[34361]: I0224 05:37:21.695267 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z"
Feb 24 05:37:21.695542 master-0 kubenswrapper[34361]: I0224 05:37:21.695290 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkz2q\" (UniqueName: \"kubernetes.io/projected/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-kube-api-access-rkz2q\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t"
Feb 24 05:37:21.695542 master-0 kubenswrapper[34361]: I0224 05:37:21.695328 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:37:21.695542 master-0 kubenswrapper[34361]: I0224 05:37:21.695395 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:21.695542 master-0 kubenswrapper[34361]: I0224 05:37:21.695481 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:37:21.695787 master-0 kubenswrapper[34361]: I0224 05:37:21.695546 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:37:21.695787 master-0 kubenswrapper[34361]: I0224 05:37:21.695606 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-etcd-serving-ca\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m"
Feb 24 05:37:21.695787 master-0 kubenswrapper[34361]: I0224 05:37:21.695681 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh2pc\" (UniqueName: \"kubernetes.io/projected/32fd577d-8966-4ab1-95cf-357291084156-kube-api-access-fh2pc\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt"
Feb 24 05:37:21.695787 master-0 kubenswrapper[34361]: I0224 05:37:21.695773 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc42f\" (UniqueName: \"kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:37:21.695948 master-0 kubenswrapper[34361]: I0224 05:37:21.695872 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:37:21.696052 master-0 kubenswrapper[34361]: I0224 05:37:21.696018 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:21.696104 master-0 kubenswrapper[34361]: I0224 05:37:21.696075 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:37:21.696144 master-0 kubenswrapper[34361]: I0224 05:37:21.696111 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9kf2\" (UniqueName: \"kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv"
Feb 24 05:37:21.696185 master-0 kubenswrapper[34361]: I0224 05:37:21.696169 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-utilities\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8"
Feb 24 05:37:21.696249 master-0 kubenswrapper[34361]: I0224 05:37:21.696174 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-config\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z"
Feb 24 05:37:21.696370 master-0 kubenswrapper[34361]: I0224 05:37:21.696339 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-utilities\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8"
Feb 24 05:37:21.696444 master-0 kubenswrapper[34361]: I0224 05:37:21.696341 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:37:21.696444 master-0 kubenswrapper[34361]: I0224 05:37:21.696421 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhmp\" (UniqueName: \"kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth"
Feb 24 05:37:21.696537 master-0 kubenswrapper[34361]: I0224 05:37:21.696462 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7"
Feb 24 05:37:21.696537 master-0 kubenswrapper[34361]: I0224 05:37:21.696499 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 05:37:21.696627 master-0 kubenswrapper[34361]: I0224 05:37:21.696572 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:21.696627 master-0 kubenswrapper[34361]: I0224 05:37:21.696614 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4d5x\" (UniqueName: \"kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f"
Feb 24 05:37:21.696727 master-0 kubenswrapper[34361]: I0224 05:37:21.696654 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs"
Feb 24 05:37:21.696727 master-0 kubenswrapper[34361]: I0224 05:37:21.696694 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"
Feb 24 05:37:21.696727 master-0
kubenswrapper[34361]: I0224 05:37:21.696718 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3363f001-1cfa-41f5-b245-30cc99dd09cb-config-volume\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:37:21.696893 master-0 kubenswrapper[34361]: I0224 05:37:21.696736 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/996ae0be-d36c-47f4-98b2-1c89591f9506-metrics-tls\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:37:21.696893 master-0 kubenswrapper[34361]: I0224 05:37:21.696758 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-ssl-certs\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:21.696893 master-0 kubenswrapper[34361]: I0224 05:37:21.696798 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-kube-api-access\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:21.696893 master-0 kubenswrapper[34361]: I0224 05:37:21.696834 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config\") pod 
\"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:21.696893 master-0 kubenswrapper[34361]: I0224 05:37:21.696859 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" Feb 24 05:37:21.696893 master-0 kubenswrapper[34361]: I0224 05:37:21.696887 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwc5b\" (UniqueName: \"kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:37:21.697209 master-0 kubenswrapper[34361]: I0224 05:37:21.696907 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d6b1ce7-1213-494c-829d-186d39eac7eb-metrics-tls\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:37:21.697209 master-0 kubenswrapper[34361]: I0224 05:37:21.696914 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67qg5\" (UniqueName: \"kubernetes.io/projected/cd674e58-b749-46fb-8a28-66012fd8b401-kube-api-access-67qg5\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:21.697348 master-0 kubenswrapper[34361]: I0224 05:37:21.697255 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-etcd-service-ca\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:37:21.697348 master-0 kubenswrapper[34361]: I0224 05:37:21.697298 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-config\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:37:21.697473 master-0 kubenswrapper[34361]: I0224 05:37:21.697337 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3363f001-1cfa-41f5-b245-30cc99dd09cb-config-volume\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:37:21.697473 master-0 kubenswrapper[34361]: I0224 05:37:21.697349 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:37:21.697590 master-0 kubenswrapper[34361]: I0224 05:37:21.697519 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.697590 master-0 kubenswrapper[34361]: I0224 05:37:21.697541 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933beda1-c930-4831-a886-3cc6b7a992ad-config\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:37:21.697590 master-0 kubenswrapper[34361]: I0224 05:37:21.697575 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-modprobe-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.697864 master-0 kubenswrapper[34361]: I0224 05:37:21.697627 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-trusted-ca-bundle\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.697864 master-0 kubenswrapper[34361]: I0224 05:37:21.697826 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcb72\" (UniqueName: \"kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72\") pod \"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:37:21.697986 master-0 kubenswrapper[34361]: I0224 05:37:21.697877 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-ca-certs\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.697986 master-0 kubenswrapper[34361]: I0224 05:37:21.697877 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-key\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" Feb 24 05:37:21.698105 master-0 kubenswrapper[34361]: I0224 05:37:21.698002 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.698105 master-0 kubenswrapper[34361]: I0224 05:37:21.698055 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:37:21.698193 master-0 kubenswrapper[34361]: I0224 05:37:21.698104 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-957g9\" (UniqueName: \"kubernetes.io/projected/f3cd3830-62b5-49d1-917e-bd993d685c65-kube-api-access-957g9\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:37:21.698193 master-0 kubenswrapper[34361]: I0224 05:37:21.698151 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.698286 master-0 kubenswrapper[34361]: I0224 05:37:21.698196 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.698286 master-0 kubenswrapper[34361]: I0224 05:37:21.698208 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-key\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" Feb 24 05:37:21.698286 master-0 kubenswrapper[34361]: I0224 05:37:21.698245 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs794\" (UniqueName: \"kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:37:21.698437 master-0 kubenswrapper[34361]: I0224 05:37:21.698327 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:37:21.698437 master-0 kubenswrapper[34361]: I0224 05:37:21.698369 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:21.698437 master-0 kubenswrapper[34361]: I0224 05:37:21.698409 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:37:21.698437 master-0 kubenswrapper[34361]: I0224 05:37:21.698428 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.698597 master-0 kubenswrapper[34361]: I0224 05:37:21.698446 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bf6w\" (UniqueName: \"kubernetes.io/projected/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-kube-api-access-4bf6w\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " 
pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:37:21.698597 master-0 kubenswrapper[34361]: I0224 05:37:21.698489 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.698597 master-0 kubenswrapper[34361]: I0224 05:37:21.698534 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5tgk\" (UniqueName: \"kubernetes.io/projected/a3561f49-0808-4d96-95ec-456fcb5c5bb4-kube-api-access-r5tgk\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:37:21.698597 master-0 kubenswrapper[34361]: I0224 05:37:21.698570 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-etcd-client\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:21.698772 master-0 kubenswrapper[34361]: I0224 05:37:21.698609 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:21.698772 master-0 kubenswrapper[34361]: I0224 05:37:21.698637 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/22813c83-2f60-44ad-9624-ad367cec08f7-serving-cert\") pod \"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:37:21.698772 master-0 kubenswrapper[34361]: I0224 05:37:21.698645 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-metrics-certs\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:37:21.698772 master-0 kubenswrapper[34361]: I0224 05:37:21.698705 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-sys\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.698772 master-0 kubenswrapper[34361]: I0224 05:37:21.698740 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.699180 master-0 kubenswrapper[34361]: I0224 05:37:21.698869 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-metrics-certs\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:37:21.699180 master-0 kubenswrapper[34361]: I0224 05:37:21.698923 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:37:21.699180 master-0 kubenswrapper[34361]: I0224 05:37:21.699018 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.699180 master-0 kubenswrapper[34361]: I0224 05:37:21.699061 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:37:21.699180 master-0 kubenswrapper[34361]: I0224 05:37:21.699101 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.699180 master-0 kubenswrapper[34361]: I0224 05:37:21.699152 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh2rh\" (UniqueName: \"kubernetes.io/projected/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-kube-api-access-dh2rh\") pod \"tuned-2w6mj\" 
(UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.699432 master-0 kubenswrapper[34361]: I0224 05:37:21.699193 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:37:21.699432 master-0 kubenswrapper[34361]: I0224 05:37:21.699256 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:37:21.699432 master-0 kubenswrapper[34361]: I0224 05:37:21.699294 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgf94\" (UniqueName: \"kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:37:21.699432 master-0 kubenswrapper[34361]: I0224 05:37:21.699363 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-stats-auth\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:37:21.699432 master-0 kubenswrapper[34361]: I0224 05:37:21.699404 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.699432 master-0 kubenswrapper[34361]: I0224 05:37:21.699405 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-trusted-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:37:21.699652 master-0 kubenswrapper[34361]: I0224 05:37:21.699444 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:37:21.699698 master-0 kubenswrapper[34361]: I0224 05:37:21.699663 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 05:37:21.700155 master-0 kubenswrapper[34361]: I0224 05:37:21.700106 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-apiservice-cert\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:37:21.700226 master-0 kubenswrapper[34361]: I0224 05:37:21.700176 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59333a14-5bdc-4590-a3da-af7300f086da-service-ca-bundle\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:37:21.700390 master-0 kubenswrapper[34361]: I0224 05:37:21.700304 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:37:21.700449 master-0 kubenswrapper[34361]: I0224 05:37:21.700397 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/88b915ff-fd94-4998-aa09-70f95c0f1b8a-env-overrides\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:37:21.700503 master-0 kubenswrapper[34361]: I0224 05:37:21.700442 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.700546 master-0 kubenswrapper[34361]: I0224 05:37:21.700525 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " 
pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:37:21.700617 master-0 kubenswrapper[34361]: I0224 05:37:21.700573 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.700687 master-0 kubenswrapper[34361]: I0224 05:37:21.700659 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.700862 master-0 kubenswrapper[34361]: I0224 05:37:21.700754 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p67bp\" (UniqueName: \"kubernetes.io/projected/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-kube-api-access-p67bp\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" Feb 24 05:37:21.700927 master-0 kubenswrapper[34361]: I0224 05:37:21.700856 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46fll\" (UniqueName: \"kubernetes.io/projected/1163571d-f555-41ad-b04c-74c2dc452efe-kube-api-access-46fll\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:21.700975 master-0 kubenswrapper[34361]: I0224 05:37:21.700935 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config\") pod 
\"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:37:21.701023 master-0 kubenswrapper[34361]: I0224 05:37:21.701007 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-client\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.701106 master-0 kubenswrapper[34361]: I0224 05:37:21.701049 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit-dir\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.701160 master-0 kubenswrapper[34361]: I0224 05:37:21.701125 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.701207 master-0 kubenswrapper[34361]: I0224 05:37:21.701190 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.701272 master-0 kubenswrapper[34361]: I0224 05:37:21.701223 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:37:21.701354 master-0 kubenswrapper[34361]: I0224 05:37:21.701286 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lt5r\" (UniqueName: \"kubernetes.io/projected/812552f3-09b1-43f8-b910-c78e776127f8-kube-api-access-4lt5r\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:21.701354 master-0 kubenswrapper[34361]: I0224 05:37:21.701335 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-tuned\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.701445 master-0 kubenswrapper[34361]: I0224 05:37:21.701362 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.701445 master-0 kubenswrapper[34361]: I0224 05:37:21.701388 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.701445 master-0 kubenswrapper[34361]: I0224 05:37:21.701416 34361 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:21.701445 master-0 kubenswrapper[34361]: I0224 05:37:21.701445 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:37:21.701599 master-0 kubenswrapper[34361]: I0224 05:37:21.701447 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/be7a4b9e-1e9a-4298-b804-21b683805c0e-stats-auth\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:37:21.701599 master-0 kubenswrapper[34361]: I0224 05:37:21.701471 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.701599 master-0 kubenswrapper[34361]: I0224 05:37:21.701568 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-serving-cert\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.701767 master-0 kubenswrapper[34361]: 
I0224 05:37:21.701610 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.701767 master-0 kubenswrapper[34361]: I0224 05:37:21.701658 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:37:21.701862 master-0 kubenswrapper[34361]: I0224 05:37:21.701711 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:37:21.701862 master-0 kubenswrapper[34361]: I0224 05:37:21.701822 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:21.701943 master-0 kubenswrapper[34361]: I0224 05:37:21.701865 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client\") pod 
\"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:21.701943 master-0 kubenswrapper[34361]: I0224 05:37:21.701907 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:37:21.702021 master-0 kubenswrapper[34361]: I0224 05:37:21.701948 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-systemd\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.702021 master-0 kubenswrapper[34361]: I0224 05:37:21.701953 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2c651d-ea1a-41f2-9745-04adc8d88904-config\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:37:21.702021 master-0 kubenswrapper[34361]: I0224 05:37:21.702002 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:37:21.702147 master-0 kubenswrapper[34361]: I0224 05:37:21.702062 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.702147 master-0 kubenswrapper[34361]: I0224 05:37:21.702087 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-tuned\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.702545 master-0 kubenswrapper[34361]: I0224 05:37:21.702127 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:37:21.702545 master-0 kubenswrapper[34361]: I0224 05:37:21.702204 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddfqw\" (UniqueName: \"kubernetes.io/projected/39623346-691b-42c8-af76-409d4f6629af-kube-api-access-ddfqw\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:37:21.702545 master-0 kubenswrapper[34361]: I0224 05:37:21.702269 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.702545 master-0 kubenswrapper[34361]: I0224 05:37:21.702368 
34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:21.702545 master-0 kubenswrapper[34361]: I0224 05:37:21.702419 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-client\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.702545 master-0 kubenswrapper[34361]: I0224 05:37:21.702430 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:21.702774 master-0 kubenswrapper[34361]: I0224 05:37:21.702530 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-cabundle\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" Feb 24 05:37:21.702774 master-0 kubenswrapper[34361]: I0224 05:37:21.702608 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-catalog-content\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " 
pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:37:21.702774 master-0 kubenswrapper[34361]: I0224 05:37:21.702659 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:37:21.702774 master-0 kubenswrapper[34361]: I0224 05:37:21.702714 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:37:21.703266 master-0 kubenswrapper[34361]: I0224 05:37:21.703210 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-config\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.703439 master-0 kubenswrapper[34361]: I0224 05:37:21.703382 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59333a14-5bdc-4590-a3da-af7300f086da-serving-cert\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:37:21.703439 master-0 kubenswrapper[34361]: I0224 05:37:21.703419 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:37:21.703555 master-0 kubenswrapper[34361]: I0224 05:37:21.703508 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:37:21.703606 master-0 kubenswrapper[34361]: I0224 05:37:21.703585 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b7f4\" (UniqueName: \"kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:37:21.703654 master-0 kubenswrapper[34361]: I0224 05:37:21.703633 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.703701 master-0 kubenswrapper[34361]: I0224 05:37:21.703679 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 
05:37:21.703741 master-0 kubenswrapper[34361]: I0224 05:37:21.703721 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:37:21.703803 master-0 kubenswrapper[34361]: I0224 05:37:21.703772 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8zb\" (UniqueName: \"kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:37:21.703851 master-0 kubenswrapper[34361]: I0224 05:37:21.703821 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:37:21.703963 master-0 kubenswrapper[34361]: I0224 05:37:21.703902 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c106275b-72b6-4877-95c3-830f93e35375-webhook-cert\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:37:21.704433 master-0 kubenswrapper[34361]: I0224 05:37:21.704391 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cni-binary-copy\") pod \"multus-8qp5g\" (UID: 
\"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.704492 master-0 kubenswrapper[34361]: I0224 05:37:21.704426 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d86d5bbe-3768-4695-810b-245a56e4fd1d-serving-cert\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:37:21.704534 master-0 kubenswrapper[34361]: I0224 05:37:21.704491 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-textfile\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.704580 master-0 kubenswrapper[34361]: I0224 05:37:21.704539 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbt92\" (UniqueName: \"kubernetes.io/projected/f938daff-1d36-4348-a689-3d1607058296-kube-api-access-xbt92\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:37:21.704580 master-0 kubenswrapper[34361]: I0224 05:37:21.704555 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c6bb439-ed17-4761-b193-580be5f6aa00-catalog-content\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:37:21.704653 master-0 kubenswrapper[34361]: I0224 05:37:21.704584 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5djr\" (UniqueName: 
\"kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr\") pod \"csi-snapshot-controller-operator-6fb4df594f-8tv99\" (UID: \"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" Feb 24 05:37:21.704653 master-0 kubenswrapper[34361]: I0224 05:37:21.704626 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:21.704732 master-0 kubenswrapper[34361]: I0224 05:37:21.704667 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-cache\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.704732 master-0 kubenswrapper[34361]: I0224 05:37:21.704707 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-node-pullsecrets\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.704907 master-0 kubenswrapper[34361]: I0224 05:37:21.704746 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 
24 05:37:21.704956 master-0 kubenswrapper[34361]: I0224 05:37:21.704925 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.705076 master-0 kubenswrapper[34361]: I0224 05:37:21.705032 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:37:21.705132 master-0 kubenswrapper[34361]: I0224 05:37:21.705098 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-var-lib-kubelet\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.705261 master-0 kubenswrapper[34361]: I0224 05:37:21.705166 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/116e6b47-d435-49ca-abb5-088788daf16a-kube-api-access-jt9fb\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:37:21.705261 master-0 kubenswrapper[34361]: I0224 05:37:21.705190 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-textfile\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.705261 master-0 kubenswrapper[34361]: I0224 05:37:21.705208 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-serving-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.705404 master-0 kubenswrapper[34361]: I0224 05:37:21.705267 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.705722 master-0 kubenswrapper[34361]: I0224 05:37:21.705678 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:37:21.705722 master-0 kubenswrapper[34361]: I0224 05:37:21.705696 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/49bfccec-61ec-4bef-a561-9f6e6f906215-package-server-manager-serving-cert\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:37:21.705807 master-0 kubenswrapper[34361]: I0224 
05:37:21.705739 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dwz2\" (UniqueName: \"kubernetes.io/projected/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-kube-api-access-5dwz2\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:21.705848 master-0 kubenswrapper[34361]: I0224 05:37:21.705798 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:37:21.706030 master-0 kubenswrapper[34361]: I0224 05:37:21.705989 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-cache\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.706144 master-0 kubenswrapper[34361]: I0224 05:37:21.706098 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-daemon-config\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.706191 master-0 kubenswrapper[34361]: I0224 05:37:21.706175 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79h66\" (UniqueName: \"kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66\") pod \"ovnkube-node-vd82q\" (UID: 
\"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.706253 master-0 kubenswrapper[34361]: I0224 05:37:21.706224 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:37:21.706347 master-0 kubenswrapper[34361]: I0224 05:37:21.706286 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-audit-policies\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:21.706489 master-0 kubenswrapper[34361]: I0224 05:37:21.706450 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-utilities\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:37:21.706543 master-0 kubenswrapper[34361]: I0224 05:37:21.706485 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/767424fb-babf-4b73-b5e2-0bee65fcf207-whereabouts-configmap\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.706543 master-0 kubenswrapper[34361]: I0224 05:37:21.706512 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" 
(UniqueName: \"kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:21.706626 master-0 kubenswrapper[34361]: I0224 05:37:21.706564 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:37:21.706626 master-0 kubenswrapper[34361]: I0224 05:37:21.706608 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:21.706726 master-0 kubenswrapper[34361]: I0224 05:37:21.706652 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kh6l\" (UniqueName: \"kubernetes.io/projected/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-kube-api-access-2kh6l\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:37:21.706829 master-0 kubenswrapper[34361]: I0224 05:37:21.706795 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d6b1ce7-1213-494c-829d-186d39eac7eb-trusted-ca\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " 
pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:37:21.706990 master-0 kubenswrapper[34361]: I0224 05:37:21.706949 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58ecd829-4749-4c8a-933b-16b4acccac90-config\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:37:21.707046 master-0 kubenswrapper[34361]: I0224 05:37:21.707031 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:37:21.707102 master-0 kubenswrapper[34361]: I0224 05:37:21.707076 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:21.707149 master-0 kubenswrapper[34361]: I0224 05:37:21.707124 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jspzm\" (UniqueName: \"kubernetes.io/projected/1533c5fa-0387-40bd-a959-e714b65cdacc-kube-api-access-jspzm\") pod \"network-check-source-58fb6744f5-kn2z7\" (UID: \"1533c5fa-0387-40bd-a959-e714b65cdacc\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" Feb 24 05:37:21.707193 master-0 kubenswrapper[34361]: I0224 05:37:21.707164 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:37:21.707234 master-0 kubenswrapper[34361]: I0224 05:37:21.707203 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-snapshots\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:21.707521 master-0 kubenswrapper[34361]: I0224 05:37:21.707469 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-config\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:37:21.707640 master-0 kubenswrapper[34361]: I0224 05:37:21.707608 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75b4304c-09f2-499e-8c2f-da603e43ba72-utilities\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:37:21.708015 master-0 kubenswrapper[34361]: I0224 05:37:21.707958 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:37:21.708130 master-0 kubenswrapper[34361]: I0224 05:37:21.708075 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-snapshots\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:21.708205 master-0 kubenswrapper[34361]: I0224 05:37:21.708177 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2r9\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:37:21.708261 master-0 kubenswrapper[34361]: I0224 05:37:21.708221 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/798dcf46-8377-46b8-8387-5261d9bbefa1-hosts-file\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:37:21.708304 master-0 kubenswrapper[34361]: I0224 05:37:21.708275 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f92qq\" (UniqueName: \"kubernetes.io/projected/bf303acd-b62e-4aa3-bd8d-15f5844302d8-kube-api-access-f92qq\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:21.708394 master-0 kubenswrapper[34361]: I0224 05:37:21.708364 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:37:21.708444 master-0 kubenswrapper[34361]: I0224 05:37:21.708403 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.708444 master-0 kubenswrapper[34361]: I0224 05:37:21.708437 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zxwj\" (UniqueName: \"kubernetes.io/projected/49b426a3-f16e-40e9-a166-7270d4cfcc60-kube-api-access-9zxwj\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:37:21.708618 master-0 kubenswrapper[34361]: I0224 05:37:21.708572 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/58ecd829-4749-4c8a-933b-16b4acccac90-serving-cert\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:37:21.708618 master-0 kubenswrapper[34361]: I0224 05:37:21.708586 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 
05:37:21.708914 master-0 kubenswrapper[34361]: I0224 05:37:21.708723 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl6rx\" (UniqueName: \"kubernetes.io/projected/2c6bb439-ed17-4761-b193-580be5f6aa00-kube-api-access-pl6rx\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:37:21.708914 master-0 kubenswrapper[34361]: I0224 05:37:21.708761 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.709055 master-0 kubenswrapper[34361]: I0224 05:37:21.708852 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:37:21.709055 master-0 kubenswrapper[34361]: I0224 05:37:21.708984 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:21.709055 master-0 kubenswrapper[34361]: I0224 05:37:21.709030 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.709055 master-0 kubenswrapper[34361]: I0224 05:37:21.709043 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-ovnkube-identity-cm\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:37:21.709205 master-0 kubenswrapper[34361]: I0224 05:37:21.709114 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.709205 master-0 kubenswrapper[34361]: I0224 05:37:21.709150 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ktz5\" (UniqueName: \"kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:37:21.709205 master-0 kubenswrapper[34361]: I0224 05:37:21.709189 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:37:21.709413 master-0 kubenswrapper[34361]: 
I0224 05:37:21.709367 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:37:21.709464 master-0 kubenswrapper[34361]: I0224 05:37:21.709427 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:37:21.709516 master-0 kubenswrapper[34361]: I0224 05:37:21.709474 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb75b\" (UniqueName: \"kubernetes.io/projected/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-kube-api-access-nb75b\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:37:21.709516 master-0 kubenswrapper[34361]: I0224 05:37:21.709495 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/633d33a1-e1b1-40b0-b56a-afb0c1085d97-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:37:21.709600 master-0 kubenswrapper[34361]: I0224 05:37:21.709571 34361 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:21.709656 master-0 kubenswrapper[34361]: I0224 05:37:21.709614 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.709712 master-0 kubenswrapper[34361]: I0224 05:37:21.709659 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmf87\" (UniqueName: \"kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:37:21.709712 master-0 kubenswrapper[34361]: I0224 05:37:21.709701 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:37:21.709797 master-0 kubenswrapper[34361]: I0224 05:37:21.709743 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " 
pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.709845 master-0 kubenswrapper[34361]: I0224 05:37:21.709801 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/88b915ff-fd94-4998-aa09-70f95c0f1b8a-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:37:21.709887 master-0 kubenswrapper[34361]: I0224 05:37:21.709872 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-catalog-content\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:21.709928 master-0 kubenswrapper[34361]: I0224 05:37:21.709904 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:21.709974 master-0 kubenswrapper[34361]: I0224 05:37:21.709935 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:21.709974 master-0 kubenswrapper[34361]: I0224 05:37:21.709963 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:37:21.710115 master-0 kubenswrapper[34361]: I0224 05:37:21.709994 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:21.710158 master-0 kubenswrapper[34361]: I0224 05:37:21.710123 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:21.710284 master-0 kubenswrapper[34361]: I0224 05:37:21.710244 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f3825c1-975c-40b5-a6ad-0f200968b3cd-catalog-content\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:21.710499 master-0 kubenswrapper[34361]: I0224 05:37:21.710459 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-serving-cert\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:37:21.710762 master-0 
kubenswrapper[34361]: I0224 05:37:21.710710 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovn-node-metrics-cert\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.710835 master-0 kubenswrapper[34361]: I0224 05:37:21.710805 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.710938 master-0 kubenswrapper[34361]: I0224 05:37:21.710909 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:37:21.710985 master-0 kubenswrapper[34361]: I0224 05:37:21.710967 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.711053 master-0 kubenswrapper[34361]: I0224 05:37:21.711025 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-9ww5z\" 
(UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:37:21.711119 master-0 kubenswrapper[34361]: I0224 05:37:21.711090 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:37:21.711189 master-0 kubenswrapper[34361]: I0224 05:37:21.711158 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-wtmp\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.711232 master-0 kubenswrapper[34361]: I0224 05:37:21.711211 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm88x\" (UniqueName: \"kubernetes.io/projected/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-kube-api-access-lm88x\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.711272 master-0 kubenswrapper[34361]: I0224 05:37:21.711252 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:21.711502 master-0 kubenswrapper[34361]: I0224 05:37:21.711291 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-trusted-ca-bundle\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:21.711502 master-0 kubenswrapper[34361]: I0224 05:37:21.711486 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-images\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:37:21.711598 master-0 kubenswrapper[34361]: I0224 05:37:21.711544 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.711652 master-0 kubenswrapper[34361]: I0224 05:37:21.711579 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/933beda1-c930-4831-a886-3cc6b7a992ad-serving-cert\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:37:21.711951 master-0 kubenswrapper[34361]: I0224 05:37:21.711905 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-cache\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: 
\"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.718700 master-0 kubenswrapper[34361]: I0224 05:37:21.718617 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 05:37:21.729395 master-0 kubenswrapper[34361]: I0224 05:37:21.729303 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/74e8b3c8-da80-492c-bfcf-199b40bde40b-ovnkube-script-lib\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.739706 master-0 kubenswrapper[34361]: I0224 05:37:21.739648 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 05:37:21.749601 master-0 kubenswrapper[34361]: I0224 05:37:21.749550 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c106275b-72b6-4877-95c3-830f93e35375-env-overrides\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:37:21.758968 master-0 kubenswrapper[34361]: I0224 05:37:21.758903 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 05:37:21.766425 master-0 kubenswrapper[34361]: I0224 05:37:21.766109 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-iptables-alerter-script\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:37:21.778816 master-0 
kubenswrapper[34361]: I0224 05:37:21.778711 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 05:37:21.783671 master-0 kubenswrapper[34361]: I0224 05:37:21.783598 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-serving-cert\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.809284 master-0 kubenswrapper[34361]: I0224 05:37:21.809186 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 24 05:37:21.813730 master-0 kubenswrapper[34361]: I0224 05:37:21.813654 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-ca-certs\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.813799 master-0 kubenswrapper[34361]: I0224 05:37:21.813748 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-systemd\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.814028 master-0 kubenswrapper[34361]: I0224 05:37:21.813969 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-systemd\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.814184 master-0 kubenswrapper[34361]: I0224 
05:37:21.814154 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.814265 master-0 kubenswrapper[34361]: I0224 05:37:21.814236 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.814421 master-0 kubenswrapper[34361]: I0224 05:37:21.814393 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.814458 master-0 kubenswrapper[34361]: I0224 05:37:21.814432 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-node-pullsecrets\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.814498 master-0 kubenswrapper[34361]: I0224 05:37:21.814463 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-var-lib-kubelet\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.814726 
master-0 kubenswrapper[34361]: I0224 05:37:21.814692 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/798dcf46-8377-46b8-8387-5261d9bbefa1-hosts-file\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:37:21.814778 master-0 kubenswrapper[34361]: I0224 05:37:21.814740 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.814851 master-0 kubenswrapper[34361]: I0224 05:37:21.814828 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.814882 master-0 kubenswrapper[34361]: I0224 05:37:21.814860 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.814977 master-0 kubenswrapper[34361]: I0224 05:37:21.814954 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.815050 master-0 kubenswrapper[34361]: I0224 05:37:21.815029 
34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.815096 master-0 kubenswrapper[34361]: I0224 05:37:21.815084 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.815178 master-0 kubenswrapper[34361]: I0224 05:37:21.815155 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-wtmp\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.815211 master-0 kubenswrapper[34361]: I0224 05:37:21.815192 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:37:21.815369 master-0 kubenswrapper[34361]: I0224 05:37:21.815348 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-host-slash\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:37:21.815429 master-0 kubenswrapper[34361]: I0224 05:37:21.815378 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-slash\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.815429 master-0 kubenswrapper[34361]: I0224 05:37:21.815421 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-systemd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.815497 master-0 kubenswrapper[34361]: I0224 05:37:21.815396 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-kubelet\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.815497 master-0 kubenswrapper[34361]: I0224 05:37:21.815464 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-bin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.815557 master-0 kubenswrapper[34361]: I0224 05:37:21.815512 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-socket-dir-parent\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.815557 master-0 kubenswrapper[34361]: I0224 05:37:21.815548 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-etc-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.815612 master-0 kubenswrapper[34361]: I0224 05:37:21.815582 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-system-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.815612 master-0 kubenswrapper[34361]: I0224 05:37:21.815606 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-wtmp\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.815726 master-0 kubenswrapper[34361]: I0224 05:37:21.815695 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/798dcf46-8377-46b8-8387-5261d9bbefa1-hosts-file\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:37:21.815726 master-0 kubenswrapper[34361]: I0224 05:37:21.815717 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-var-lib-kubelet\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.815794 master-0 kubenswrapper[34361]: I0224 05:37:21.815771 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.815843 master-0 kubenswrapper[34361]: I0224 05:37:21.815820 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-node-pullsecrets\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.815875 master-0 kubenswrapper[34361]: I0224 05:37:21.815842 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.815875 master-0 kubenswrapper[34361]: I0224 05:37:21.815859 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.815941 master-0 kubenswrapper[34361]: I0224 05:37:21.815893 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/812552f3-09b1-43f8-b910-c78e776127f8-audit-dir\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:21.815941 master-0 kubenswrapper[34361]: I0224 05:37:21.815909 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-kubelet\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.816002 master-0 kubenswrapper[34361]: I0224 05:37:21.815965 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-lib-modules\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.816139 master-0 kubenswrapper[34361]: I0224 05:37:21.816114 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-lib-modules\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.816174 master-0 kubenswrapper[34361]: I0224 05:37:21.816139 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.816174 master-0 kubenswrapper[34361]: I0224 05:37:21.816166 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/812552f3-09b1-43f8-b910-c78e776127f8-audit-dir\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:21.816240 master-0 kubenswrapper[34361]: I0224 05:37:21.816186 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/a3561f49-0808-4d96-95ec-456fcb5c5bb4-rootfs\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:37:21.816240 master-0 kubenswrapper[34361]: I0224 05:37:21.816219 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-cnibin\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.816302 master-0 kubenswrapper[34361]: I0224 05:37:21.816241 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.816351 master-0 kubenswrapper[34361]: I0224 05:37:21.816295 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-host\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.816389 master-0 kubenswrapper[34361]: I0224 05:37:21.816373 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-root\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.816442 master-0 kubenswrapper[34361]: I0224 05:37:21.816414 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: 
\"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.816476 master-0 kubenswrapper[34361]: I0224 05:37:21.816462 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.816506 master-0 kubenswrapper[34361]: I0224 05:37:21.816492 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-host\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.816599 master-0 kubenswrapper[34361]: I0224 05:37:21.816571 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-run\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.816643 master-0 kubenswrapper[34361]: I0224 05:37:21.816594 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-docker\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.816643 master-0 kubenswrapper[34361]: I0224 05:37:21.816258 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a3561f49-0808-4d96-95ec-456fcb5c5bb4-rootfs\") 
pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:37:21.816643 master-0 kubenswrapper[34361]: I0224 05:37:21.816618 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.816730 master-0 kubenswrapper[34361]: I0224 05:37:21.816660 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.816730 master-0 kubenswrapper[34361]: I0224 05:37:21.816717 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-root\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.816900 master-0 kubenswrapper[34361]: I0224 05:37:21.816860 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-os-release\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.816935 master-0 kubenswrapper[34361]: I0224 05:37:21.816880 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-run\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " 
pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.816967 master-0 kubenswrapper[34361]: I0224 05:37:21.816930 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.817098 master-0 kubenswrapper[34361]: I0224 05:37:21.817051 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysconfig\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.817173 master-0 kubenswrapper[34361]: I0224 05:37:21.817137 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysconfig\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.817173 master-0 kubenswrapper[34361]: I0224 05:37:21.817139 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.817236 master-0 kubenswrapper[34361]: I0224 05:37:21.817208 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-k8s-cni-cncf-io\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " 
pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.817267 master-0 kubenswrapper[34361]: I0224 05:37:21.817244 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.817399 master-0 kubenswrapper[34361]: I0224 05:37:21.817371 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-ovn\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.817486 master-0 kubenswrapper[34361]: I0224 05:37:21.817446 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.817599 master-0 kubenswrapper[34361]: I0224 05:37:21.817570 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-containers\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.817664 master-0 kubenswrapper[34361]: I0224 05:37:21.817634 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.817774 master-0 kubenswrapper[34361]: I0224 05:37:21.817727 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-conf-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.817836 master-0 kubenswrapper[34361]: I0224 05:37:21.817802 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:21.817879 master-0 kubenswrapper[34361]: I0224 05:37:21.817848 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:21.817916 master-0 kubenswrapper[34361]: I0224 05:37:21.817898 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-conf\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.818022 master-0 kubenswrapper[34361]: I0224 05:37:21.817982 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.818057 master-0 kubenswrapper[34361]: I0224 05:37:21.818009 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 05:37:21.818090 master-0 kubenswrapper[34361]: I0224 05:37:21.818071 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-bin\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.818256 master-0 kubenswrapper[34361]: I0224 05:37:21.818232 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f3cd3830-62b5-49d1-917e-bd993d685c65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:37:21.818337 master-0 kubenswrapper[34361]: I0224 05:37:21.818286 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-sysctl-conf\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.818374 master-0 kubenswrapper[34361]: I0224 05:37:21.818334 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.818374 master-0 kubenswrapper[34361]: I0224 05:37:21.818364 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-log-socket\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.818465 master-0 kubenswrapper[34361]: I0224 05:37:21.818443 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-kubernetes\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.818506 master-0 kubenswrapper[34361]: I0224 05:37:21.818459 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f3cd3830-62b5-49d1-917e-bd993d685c65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:37:21.818555 master-0 kubenswrapper[34361]: I0224 05:37:21.818523 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-kubernetes\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.818665 master-0 kubenswrapper[34361]: I0224 05:37:21.818630 34361 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-sys\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.818707 master-0 kubenswrapper[34361]: I0224 05:37:21.818691 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:37:21.818738 master-0 kubenswrapper[34361]: I0224 05:37:21.818633 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-sys\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:21.818778 master-0 kubenswrapper[34361]: I0224 05:37:21.818759 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.818830 master-0 kubenswrapper[34361]: I0224 05:37:21.818804 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/f77227c8-c52d-4a71-ae1b-792055f6f23d-host-etc-kube\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:37:21.818877 master-0 kubenswrapper[34361]: I0224 05:37:21.818857 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.818945 master-0 kubenswrapper[34361]: I0224 05:37:21.818915 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-etc-docker\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:21.818945 master-0 kubenswrapper[34361]: I0224 05:37:21.818931 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.819007 master-0 kubenswrapper[34361]: I0224 05:37:21.818960 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-var-lib-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.819007 master-0 kubenswrapper[34361]: I0224 05:37:21.818984 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.819007 
master-0 kubenswrapper[34361]: I0224 05:37:21.818858 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-run-openvswitch\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.819087 master-0 kubenswrapper[34361]: I0224 05:37:21.819023 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-var-lib-cni-multus\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.819087 master-0 kubenswrapper[34361]: I0224 05:37:21.819061 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.819158 master-0 kubenswrapper[34361]: I0224 05:37:21.819123 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.819207 master-0 kubenswrapper[34361]: I0224 05:37:21.819185 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.819270 master-0 kubenswrapper[34361]: I0224 05:37:21.819250 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.819302 master-0 kubenswrapper[34361]: I0224 05:37:21.819281 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.819382 master-0 kubenswrapper[34361]: I0224 05:37:21.819360 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-ssl-certs\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:21.819520 master-0 kubenswrapper[34361]: I0224 05:37:21.819498 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.819552 master-0 kubenswrapper[34361]: I0224 05:37:21.819531 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-modprobe-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.819608 
master-0 kubenswrapper[34361]: I0224 05:37:21.819588 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.819667 master-0 kubenswrapper[34361]: I0224 05:37:21.819648 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.819706 master-0 kubenswrapper[34361]: I0224 05:37:21.819678 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.819738 master-0 kubenswrapper[34361]: I0224 05:37:21.819706 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.819738 master-0 kubenswrapper[34361]: I0224 05:37:21.819732 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-sys\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " 
pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.819823 master-0 kubenswrapper[34361]: I0224 05:37:21.819803 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-hostroot\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.819893 master-0 kubenswrapper[34361]: I0224 05:37:21.819855 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-os-release\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.820035 master-0 kubenswrapper[34361]: I0224 05:37:21.819911 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-etc-modprobe-d\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.820035 master-0 kubenswrapper[34361]: I0224 05:37:21.819928 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-etc-ssl-certs\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:21.820035 master-0 kubenswrapper[34361]: I0224 05:37:21.819957 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-cnibin\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " 
pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.820035 master-0 kubenswrapper[34361]: I0224 05:37:21.820001 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-netns\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.820035 master-0 kubenswrapper[34361]: I0224 05:37:21.820007 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-etc-kubernetes\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.820175 master-0 kubenswrapper[34361]: I0224 05:37:21.820047 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-tuning-conf-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.820175 master-0 kubenswrapper[34361]: I0224 05:37:21.820113 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-etc-containers\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:21.820175 master-0 kubenswrapper[34361]: I0224 05:37:21.820161 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-sys\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " 
pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:21.820288 master-0 kubenswrapper[34361]: I0224 05:37:21.820228 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.820288 master-0 kubenswrapper[34361]: I0224 05:37:21.820256 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-systemd-units\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.820288 master-0 kubenswrapper[34361]: I0224 05:37:21.820275 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.820397 master-0 kubenswrapper[34361]: I0224 05:37:21.820343 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-run-ovn-kubernetes\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.820397 master-0 kubenswrapper[34361]: I0224 05:37:21.820344 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-host-cni-netd\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.820397 master-0 kubenswrapper[34361]: I0224 05:37:21.820392 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-multus-certs\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.820481 master-0 kubenswrapper[34361]: I0224 05:37:21.820398 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.820585 master-0 kubenswrapper[34361]: I0224 05:37:21.820527 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.820672 master-0 kubenswrapper[34361]: I0224 05:37:21.820650 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-multus-cni-dir\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.820672 master-0 kubenswrapper[34361]: I0224 05:37:21.820663 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit-dir\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.820739 master-0 
kubenswrapper[34361]: I0224 05:37:21.820559 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-host-run-netns\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:21.820771 master-0 kubenswrapper[34361]: I0224 05:37:21.820739 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.820810 master-0 kubenswrapper[34361]: I0224 05:37:21.820785 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/74e8b3c8-da80-492c-bfcf-199b40bde40b-node-log\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:21.820920 master-0 kubenswrapper[34361]: I0224 05:37:21.820859 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit-dir\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.820973 master-0 kubenswrapper[34361]: I0224 05:37:21.820926 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.820973 master-0 kubenswrapper[34361]: I0224 05:37:21.820893 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/767424fb-babf-4b73-b5e2-0bee65fcf207-system-cni-dir\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:21.839011 master-0 kubenswrapper[34361]: I0224 05:37:21.838948 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 05:37:21.847218 master-0 kubenswrapper[34361]: I0224 05:37:21.847127 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-image-import-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.859192 master-0 kubenswrapper[34361]: I0224 05:37:21.858838 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 05:37:21.874995 master-0 kubenswrapper[34361]: I0224 05:37:21.867203 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-etcd-serving-ca\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.880069 master-0 kubenswrapper[34361]: I0224 05:37:21.879998 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 05:37:21.884463 master-0 kubenswrapper[34361]: I0224 05:37:21.884398 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-serving-cert\") pod 
\"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:21.907717 master-0 kubenswrapper[34361]: I0224 05:37:21.907648 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 05:37:21.908901 master-0 kubenswrapper[34361]: I0224 05:37:21.908796 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-trusted-ca-bundle\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:21.919097 master-0 kubenswrapper[34361]: I0224 05:37:21.918975 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 24 05:37:21.919672 master-0 kubenswrapper[34361]: I0224 05:37:21.919614 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b9a96f0d-16b8-47ee-baf2-807d2260fa71-tls-certificates\") pod \"prometheus-operator-admission-webhook-75d56db95f-hw4m2\" (UID: \"b9a96f0d-16b8-47ee-baf2-807d2260fa71\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" Feb 24 05:37:21.938576 master-0 kubenswrapper[34361]: I0224 05:37:21.938521 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 05:37:21.939490 master-0 kubenswrapper[34361]: I0224 05:37:21.939405 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-etcd-client\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " 
pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:21.949204 master-0 kubenswrapper[34361]: I0224 05:37:21.949158 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.959289 master-0 kubenswrapper[34361]: I0224 05:37:21.959076 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 05:37:21.962872 master-0 kubenswrapper[34361]: I0224 05:37:21.962745 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-encryption-config\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:21.964871 master-0 kubenswrapper[34361]: I0224 05:37:21.964830 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:21.979662 master-0 kubenswrapper[34361]: I0224 05:37:21.979580 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 05:37:21.999576 master-0 kubenswrapper[34361]: I0224 05:37:21.999510 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 05:37:22.019276 master-0 kubenswrapper[34361]: I0224 05:37:22.019224 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 05:37:22.020684 master-0 kubenswrapper[34361]: I0224 05:37:22.020582 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/812552f3-09b1-43f8-b910-c78e776127f8-serving-cert\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:22.038333 master-0 kubenswrapper[34361]: I0224 05:37:22.038265 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 05:37:22.058957 master-0 kubenswrapper[34361]: I0224 05:37:22.058887 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 05:37:22.080119 master-0 kubenswrapper[34361]: I0224 05:37:22.079759 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 05:37:22.080529 master-0 kubenswrapper[34361]: I0224 05:37:22.080226 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-config\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 
24 05:37:22.099369 master-0 kubenswrapper[34361]: I0224 05:37:22.099283 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 05:37:22.104060 master-0 kubenswrapper[34361]: I0224 05:37:22.103994 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-audit\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:22.128911 master-0 kubenswrapper[34361]: I0224 05:37:22.128808 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") pod \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " Feb 24 05:37:22.129164 master-0 kubenswrapper[34361]: I0224 05:37:22.128964 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:37:22.129329 master-0 kubenswrapper[34361]: I0224 05:37:22.129266 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") pod \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " Feb 24 05:37:22.129531 master-0 kubenswrapper[34361]: I0224 05:37:22.129443 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock" (OuterVolumeSpecName: "var-lock") pod "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:37:22.132058 master-0 kubenswrapper[34361]: I0224 05:37:22.132014 34361 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:37:22.132367 master-0 kubenswrapper[34361]: I0224 05:37:22.132072 34361 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:37:22.139768 master-0 kubenswrapper[34361]: I0224 05:37:22.139724 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 05:37:22.143091 master-0 kubenswrapper[34361]: I0224 05:37:22.143048 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-service-ca\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " 
pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:22.160021 master-0 kubenswrapper[34361]: I0224 05:37:22.159946 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 05:37:22.179569 master-0 kubenswrapper[34361]: I0224 05:37:22.179400 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 05:37:22.186943 master-0 kubenswrapper[34361]: I0224 05:37:22.186866 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-etcd-serving-ca\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:22.198943 master-0 kubenswrapper[34361]: I0224 05:37:22.198890 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 05:37:22.240336 master-0 kubenswrapper[34361]: I0224 05:37:22.236051 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 05:37:22.240336 master-0 kubenswrapper[34361]: I0224 05:37:22.239751 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 05:37:22.244328 master-0 kubenswrapper[34361]: I0224 05:37:22.242761 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-trusted-ca-bundle\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:22.261100 master-0 kubenswrapper[34361]: I0224 05:37:22.261026 34361 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 05:37:22.268008 master-0 kubenswrapper[34361]: I0224 05:37:22.267897 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-signing-cabundle\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" Feb 24 05:37:22.281154 master-0 kubenswrapper[34361]: I0224 05:37:22.281109 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 05:37:22.287929 master-0 kubenswrapper[34361]: I0224 05:37:22.287873 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/812552f3-09b1-43f8-b910-c78e776127f8-audit-policies\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: \"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:22.299188 master-0 kubenswrapper[34361]: I0224 05:37:22.299128 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 05:37:22.318475 master-0 kubenswrapper[34361]: I0224 05:37:22.318398 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-kf8b6" Feb 24 05:37:22.337525 master-0 kubenswrapper[34361]: I0224 05:37:22.337482 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 05:37:22.342441 master-0 kubenswrapper[34361]: I0224 05:37:22.342408 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/32fd577d-8966-4ab1-95cf-357291084156-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" Feb 24 05:37:22.358497 master-0 kubenswrapper[34361]: I0224 05:37:22.358463 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 05:37:22.379141 master-0 kubenswrapper[34361]: I0224 05:37:22.379070 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 05:37:22.398917 master-0 kubenswrapper[34361]: I0224 05:37:22.398841 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-ckqkb" Feb 24 05:37:22.419942 master-0 kubenswrapper[34361]: I0224 05:37:22.419847 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 05:37:22.420554 master-0 kubenswrapper[34361]: I0224 05:37:22.420496 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-profile-collector-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:37:22.425710 master-0 kubenswrapper[34361]: I0224 05:37:22.425650 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-profile-collector-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 
05:37:22.438860 master-0 kubenswrapper[34361]: I0224 05:37:22.438707 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 05:37:22.441410 master-0 kubenswrapper[34361]: I0224 05:37:22.441363 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-srv-cert\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:37:22.458169 master-0 kubenswrapper[34361]: I0224 05:37:22.458118 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 24 05:37:22.462299 master-0 kubenswrapper[34361]: I0224 05:37:22.462201 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cert\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:37:22.479974 master-0 kubenswrapper[34361]: I0224 05:37:22.479897 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 24 05:37:22.483902 master-0 kubenswrapper[34361]: I0224 05:37:22.483135 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/39623346-691b-42c8-af76-409d4f6629af-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:37:22.499301 master-0 kubenswrapper[34361]: I0224 05:37:22.499180 34361 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 24 05:37:22.508504 master-0 kubenswrapper[34361]: I0224 05:37:22.508435 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-config\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:37:22.519198 master-0 kubenswrapper[34361]: I0224 05:37:22.519114 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-85vp6" Feb 24 05:37:22.538346 master-0 kubenswrapper[34361]: I0224 05:37:22.538227 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 24 05:37:22.542194 master-0 kubenswrapper[34361]: I0224 05:37:22.542129 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39623346-691b-42c8-af76-409d4f6629af-images\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:37:22.559513 master-0 kubenswrapper[34361]: I0224 05:37:22.559367 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 05:37:22.562610 master-0 kubenswrapper[34361]: I0224 05:37:22.562562 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 
05:37:22.576219 master-0 kubenswrapper[34361]: I0224 05:37:22.576174 34361 request.go:700] Waited for 1.009033357s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 24 05:37:22.578366 master-0 kubenswrapper[34361]: I0224 05:37:22.578325 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 05:37:22.598780 master-0 kubenswrapper[34361]: I0224 05:37:22.598710 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rv6pq" Feb 24 05:37:22.608124 master-0 kubenswrapper[34361]: I0224 05:37:22.608071 34361 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 24 05:37:22.618765 master-0 kubenswrapper[34361]: I0224 05:37:22.618697 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 05:37:22.626270 master-0 kubenswrapper[34361]: I0224 05:37:22.626113 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:22.638552 master-0 kubenswrapper[34361]: I0224 05:37:22.638500 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 05:37:22.665798 master-0 kubenswrapper[34361]: I0224 05:37:22.665712 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 
05:37:22.671108 master-0 kubenswrapper[34361]: I0224 05:37:22.671006 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:22.678628 master-0 kubenswrapper[34361]: E0224 05:37:22.678564 34361 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.678784 master-0 kubenswrapper[34361]: E0224 05:37:22.678683 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca podName:da19bb93-c9ba-4e60-9e83-d92bc0dd33c4 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.17865859 +0000 UTC m=+2.881275646 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca") pod "controller-manager-7657d7494-mmsz6" (UID: "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.678784 master-0 kubenswrapper[34361]: E0224 05:37:22.678714 34361 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.678935 master-0 kubenswrapper[34361]: I0224 05:37:22.678840 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 05:37:22.678935 master-0 kubenswrapper[34361]: E0224 05:37:22.678843 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls podName:f3cd3830-62b5-49d1-917e-bd993d685c65 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.178806894 +0000 UTC m=+2.881423970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" (UID: "f3cd3830-62b5-49d1-917e-bd993d685c65") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.681940 master-0 kubenswrapper[34361]: E0224 05:37:22.681865 34361 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.682092 master-0 kubenswrapper[34361]: E0224 05:37:22.681991 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls podName:23bdafdd-27c9-4461-be4a-3ea916ac3875 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.181963069 +0000 UTC m=+2.884580135 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls") pod "cluster-image-registry-operator-779979bdf7-t98nr" (UID: "23bdafdd-27c9-4461-be4a-3ea916ac3875") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.682092 master-0 kubenswrapper[34361]: E0224 05:37:22.681890 34361 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.682092 master-0 kubenswrapper[34361]: E0224 05:37:22.682049 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca podName:b46907eb-36d6-4410-b7d8-8012b254c861 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.182036021 +0000 UTC m=+2.884653077 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca") pod "cloud-credential-operator-6968c58f46-68rth" (UID: "b46907eb-36d6-4410-b7d8-8012b254c861") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.683300 master-0 kubenswrapper[34361]: E0224 05:37:22.683247 34361 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.683300 master-0 kubenswrapper[34361]: E0224 05:37:22.683291 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.683496 master-0 kubenswrapper[34361]: E0224 05:37:22.683359 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert podName:ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.183339956 +0000 UTC m=+2.885957042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert") pod "insights-operator-59b498fcfb-mprnx" (UID: "ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.683496 master-0 kubenswrapper[34361]: E0224 05:37:22.683396 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config podName:b8d28792-2365-4e9e-b61a-46cd2ef8b632 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.183380037 +0000 UTC m=+2.885997113 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-754bc4d665-xjddh" (UID: "b8d28792-2365-4e9e-b61a-46cd2ef8b632") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.683496 master-0 kubenswrapper[34361]: E0224 05:37:22.683398 34361 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.683496 master-0 kubenswrapper[34361]: E0224 05:37:22.683412 34361 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.683496 master-0 kubenswrapper[34361]: E0224 05:37:22.683452 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config podName:e6a0fc47-b446-4902-9f8a-04870cbafcab nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.183437219 +0000 UTC m=+2.886054275 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config") pod "machine-approver-7dd9c7d7b9-pb6sw" (UID: "e6a0fc47-b446-4902-9f8a-04870cbafcab") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.683496 master-0 kubenswrapper[34361]: E0224 05:37:22.683456 34361 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.683496 master-0 kubenswrapper[34361]: E0224 05:37:22.683478 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config podName:b426cb33-1624-45e6-b8c5-4e8d251f6339 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.18346013 +0000 UTC m=+2.886077386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config") pod "route-controller-manager-654dcf5585-fgmnd" (UID: "b426cb33-1624-45e6-b8c5-4e8d251f6339") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.683496 master-0 kubenswrapper[34361]: E0224 05:37:22.683510 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token podName:c847d0c0-cc92-4d56-9e47-b83d9a39a745 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.18349239 +0000 UTC m=+2.886109476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token") pod "machine-config-server-xxl55" (UID: "c847d0c0-cc92-4d56-9e47-b83d9a39a745") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.685999 master-0 kubenswrapper[34361]: E0224 05:37:22.685945 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.686110 master-0 kubenswrapper[34361]: E0224 05:37:22.686015 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.186001148 +0000 UTC m=+2.888618204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.686110 master-0 kubenswrapper[34361]: E0224 05:37:22.686018 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.686110 master-0 kubenswrapper[34361]: E0224 05:37:22.686063 34361 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.686110 master-0 kubenswrapper[34361]: E0224 05:37:22.686069 34361 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.686110 master-0 
kubenswrapper[34361]: E0224 05:37:22.686024 34361 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.686110 master-0 kubenswrapper[34361]: E0224 05:37:22.686109 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca podName:f2be5ed6-fdf0-4462-a319-eed1a5a1c778 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.18608592 +0000 UTC m=+2.888703006 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca") pod "node-exporter-qk7rz" (UID: "f2be5ed6-fdf0-4462-a319-eed1a5a1c778") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.686548 master-0 kubenswrapper[34361]: E0224 05:37:22.686142 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs podName:6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.186124531 +0000 UTC m=+2.888741607 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs") pod "multus-admission-controller-5f54bf67d4-5tf9t" (UID: "6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.686548 master-0 kubenswrapper[34361]: E0224 05:37:22.686185 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config podName:39c4d0aa-c372-4d02-9302-337e68b56784 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.186172132 +0000 UTC m=+2.888789218 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config") pod "machine-config-operator-7f8c75f984-922md" (UID: "39c4d0aa-c372-4d02-9302-337e68b56784") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.686548 master-0 kubenswrapper[34361]: E0224 05:37:22.686216 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert podName:f938daff-1d36-4348-a689-3d1607058296 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.186202363 +0000 UTC m=+2.888819439 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert") pod "ingress-canary-5m82s" (UID: "f938daff-1d36-4348-a689-3d1607058296") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.687355 master-0 kubenswrapper[34361]: E0224 05:37:22.687273 34361 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.687355 master-0 kubenswrapper[34361]: E0224 05:37:22.687336 34361 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.687355 master-0 kubenswrapper[34361]: E0224 05:37:22.687351 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images podName:116e6b47-d435-49ca-abb5-088788daf16a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.187339104 +0000 UTC m=+2.889956160 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images") pod "machine-api-operator-5c7cf458b4-65mc5" (UID: "116e6b47-d435-49ca-abb5-088788daf16a") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.687583 master-0 kubenswrapper[34361]: E0224 05:37:22.687399 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert podName:3f511d03-a182-4968-ba40-5c5c10e5e6be nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.187384225 +0000 UTC m=+2.890001301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert") pod "openshift-config-operator-6f47d587d6-7b87v" (UID: "3f511d03-a182-4968-ba40-5c5c10e5e6be") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.687583 master-0 kubenswrapper[34361]: E0224 05:37:22.687496 34361 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.687583 master-0 kubenswrapper[34361]: E0224 05:37:22.687550 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.187540239 +0000 UTC m=+2.890157295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.687583 master-0 kubenswrapper[34361]: E0224 05:37:22.687562 34361 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.687894 master-0 kubenswrapper[34361]: E0224 05:37:22.687747 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert podName:5d51ce58-55f6-45d5-9d5d-7b31ae42380a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.187683793 +0000 UTC m=+2.890300879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert") pod "cluster-autoscaler-operator-86b8dc6d6-mcf2z" (UID: "5d51ce58-55f6-45d5-9d5d-7b31ae42380a") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.690140 master-0 kubenswrapper[34361]: E0224 05:37:22.690018 34361 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.690140 master-0 kubenswrapper[34361]: E0224 05:37:22.690084 34361 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.690140 master-0 kubenswrapper[34361]: E0224 05:37:22.690105 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls podName:1163571d-f555-41ad-b04c-74c2dc452efe 
nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.190084648 +0000 UTC m=+2.892701734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.690432 master-0 kubenswrapper[34361]: E0224 05:37:22.690152 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca podName:23bdafdd-27c9-4461-be4a-3ea916ac3875 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.190135349 +0000 UTC m=+2.892752435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca") pod "cluster-image-registry-operator-779979bdf7-t98nr" (UID: "23bdafdd-27c9-4461-be4a-3ea916ac3875") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.690432 master-0 kubenswrapper[34361]: E0224 05:37:22.690204 34361 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.690432 master-0 kubenswrapper[34361]: E0224 05:37:22.690275 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images podName:f3cd3830-62b5-49d1-917e-bd993d685c65 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.190261172 +0000 UTC m=+2.892878238 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images") pod "cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" (UID: "f3cd3830-62b5-49d1-917e-bd993d685c65") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.691581 master-0 kubenswrapper[34361]: E0224 05:37:22.691534 34361 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.691684 master-0 kubenswrapper[34361]: E0224 05:37:22.691627 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config podName:f2be5ed6-fdf0-4462-a319-eed1a5a1c778 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.191609399 +0000 UTC m=+2.894226485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config") pod "node-exporter-qk7rz" (UID: "f2be5ed6-fdf0-4462-a319-eed1a5a1c778") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.691684 master-0 kubenswrapper[34361]: E0224 05:37:22.691645 34361 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.691991 master-0 kubenswrapper[34361]: E0224 05:37:22.691703 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls podName:a3561f49-0808-4d96-95ec-456fcb5c5bb4 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.191686881 +0000 UTC m=+2.894303947 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls") pod "machine-config-daemon-c56dz" (UID: "a3561f49-0808-4d96-95ec-456fcb5c5bb4") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.693228 master-0 kubenswrapper[34361]: E0224 05:37:22.692834 34361 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.693228 master-0 kubenswrapper[34361]: E0224 05:37:22.692878 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.693228 master-0 kubenswrapper[34361]: E0224 05:37:22.692907 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config podName:bf303acd-b62e-4aa3-bd8d-15f5844302d8 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.192893683 +0000 UTC m=+2.895510739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-6dbff8cb4c-hvjlk" (UID: "bf303acd-b62e-4aa3-bd8d-15f5844302d8") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.693228 master-0 kubenswrapper[34361]: E0224 05:37:22.692935 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca podName:b8d28792-2365-4e9e-b61a-46cd2ef8b632 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.192921264 +0000 UTC m=+2.895538320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca") pod "prometheus-operator-754bc4d665-xjddh" (UID: "b8d28792-2365-4e9e-b61a-46cd2ef8b632") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.694620 master-0 kubenswrapper[34361]: E0224 05:37:22.694566 34361 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.694899 master-0 kubenswrapper[34361]: E0224 05:37:22.694668 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert podName:49b426a3-f16e-40e9-a166-7270d4cfcc60 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.194648581 +0000 UTC m=+2.897265667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert") pod "packageserver-df5f88cd4-cwzcs" (UID: "49b426a3-f16e-40e9-a166-7270d4cfcc60") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.695748 master-0 kubenswrapper[34361]: E0224 05:37:22.695686 34361 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.695855 master-0 kubenswrapper[34361]: E0224 05:37:22.695769 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert podName:b46907eb-36d6-4410-b7d8-8012b254c861 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.19575107 +0000 UTC m=+2.898368146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-6968c58f46-68rth" (UID: "b46907eb-36d6-4410-b7d8-8012b254c861") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:22.695855 master-0 kubenswrapper[34361]: E0224 05:37:22.695840 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.696066 master-0 kubenswrapper[34361]: E0224 05:37:22.695910 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.195895394 +0000 UTC m=+2.898512460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.696066 master-0 kubenswrapper[34361]: E0224 05:37:22.695801 34361 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.696066 master-0 kubenswrapper[34361]: E0224 05:37:22.696026 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config podName:5d51ce58-55f6-45d5-9d5d-7b31ae42380a nodeName:}" failed. 
No retries permitted until 2026-02-24 05:37:23.196005417 +0000 UTC m=+2.898622493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config") pod "cluster-autoscaler-operator-86b8dc6d6-mcf2z" (UID: "5d51ce58-55f6-45d5-9d5d-7b31ae42380a") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.697017 master-0 kubenswrapper[34361]: E0224 05:37:22.696967 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:22.697103 master-0 kubenswrapper[34361]: E0224 05:37:22.697048 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.197029864 +0000 UTC m=+2.899646950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.697194 master-0 kubenswrapper[34361]: E0224 05:37:22.697157 34361 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.697194 master-0 kubenswrapper[34361]: E0224 05:37:22.697170 34361 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.697343 master-0 kubenswrapper[34361]: E0224 05:37:22.697231 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config podName:80cc7ad6-051b-4ee5-94af-611388d9622a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.19721188 +0000 UTC m=+2.899828966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-59584d565f-gsgxz" (UID: "80cc7ad6-051b-4ee5-94af-611388d9622a") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.697343 master-0 kubenswrapper[34361]: E0224 05:37:22.697260 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca podName:b426cb33-1624-45e6-b8c5-4e8d251f6339 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.197248451 +0000 UTC m=+2.899865527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca") pod "route-controller-manager-654dcf5585-fgmnd" (UID: "b426cb33-1624-45e6-b8c5-4e8d251f6339") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.698238 master-0 kubenswrapper[34361]: E0224 05:37:22.698183 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.698520 master-0 kubenswrapper[34361]: E0224 05:37:22.698252 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls podName:b8d28792-2365-4e9e-b61a-46cd2ef8b632 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.198236747 +0000 UTC m=+2.900853833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls") pod "prometheus-operator-754bc4d665-xjddh" (UID: "b8d28792-2365-4e9e-b61a-46cd2ef8b632") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.698520 master-0 kubenswrapper[34361]: E0224 05:37:22.698268 34361 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.698520 master-0 kubenswrapper[34361]: E0224 05:37:22.698387 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls podName:1e7f7c02-4c84-432a-8d59-25dd3bfef5c2 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.198368681 +0000 UTC m=+2.900985927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls") pod "machine-config-controller-54cb48566c-9ww5z" (UID: "1e7f7c02-4c84-432a-8d59-25dd3bfef5c2") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.698794 master-0 kubenswrapper[34361]: E0224 05:37:22.698746 34361 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.698865 master-0 kubenswrapper[34361]: E0224 05:37:22.698814 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle podName:ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.198796962 +0000 UTC m=+2.901414038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle") pod "insights-operator-59b498fcfb-mprnx" (UID: "ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.698865 master-0 kubenswrapper[34361]: E0224 05:37:22.698830 34361 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7qtvbjhkqad41: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.699000 master-0 kubenswrapper[34361]: E0224 05:37:22.698894 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.198877894 +0000 UTC m=+2.901494950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.699617 master-0 kubenswrapper[34361]: I0224 05:37:22.699567 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 05:37:22.699711 master-0 kubenswrapper[34361]: E0224 05:37:22.699637 34361 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.699776 master-0 kubenswrapper[34361]: E0224 05:37:22.699717 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config podName:116e6b47-d435-49ca-abb5-088788daf16a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.199696906 +0000 UTC m=+2.902313992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config") pod "machine-api-operator-5c7cf458b4-65mc5" (UID: "116e6b47-d435-49ca-abb5-088788daf16a") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.701868 master-0 kubenswrapper[34361]: E0224 05:37:22.701816 34361 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.701975 master-0 kubenswrapper[34361]: E0224 05:37:22.701893 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert podName:e1f03d97-1a6a-41e4-9ed3-cd9b01c46400 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.201875565 +0000 UTC m=+2.904492641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f94476f49-tlmg5" (UID: "e1f03d97-1a6a-41e4-9ed3-cd9b01c46400") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.702238 master-0 kubenswrapper[34361]: I0224 05:37:22.702184 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"
Feb 24 05:37:22.702367 master-0 kubenswrapper[34361]: E0224 05:37:22.702284 34361 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.702435 master-0 kubenswrapper[34361]: E0224 05:37:22.702385 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert podName:49b426a3-f16e-40e9-a166-7270d4cfcc60 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.202367458 +0000 UTC m=+2.904984544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert") pod "packageserver-df5f88cd4-cwzcs" (UID: "49b426a3-f16e-40e9-a166-7270d4cfcc60") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.703105 master-0 kubenswrapper[34361]: E0224 05:37:22.703058 34361 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.703207 master-0 kubenswrapper[34361]: E0224 05:37:22.703130 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.203115379 +0000 UTC m=+2.905732455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.703415 master-0 kubenswrapper[34361]: E0224 05:37:22.703339 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.703415 master-0 kubenswrapper[34361]: E0224 05:37:22.703412 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.203396796 +0000 UTC m=+2.906013882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.703565 master-0 kubenswrapper[34361]: E0224 05:37:22.703443 34361 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.703565 master-0 kubenswrapper[34361]: E0224 05:37:22.703460 34361 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.703565 master-0 kubenswrapper[34361]: E0224 05:37:22.703491 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config podName:a3561f49-0808-4d96-95ec-456fcb5c5bb4 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.203478728 +0000 UTC m=+2.906095814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config") pod "machine-config-daemon-c56dz" (UID: "a3561f49-0808-4d96-95ec-456fcb5c5bb4") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.703565 master-0 kubenswrapper[34361]: E0224 05:37:22.703524 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls podName:f2be5ed6-fdf0-4462-a319-eed1a5a1c778 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.203508479 +0000 UTC m=+2.906125555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls") pod "node-exporter-qk7rz" (UID: "f2be5ed6-fdf0-4462-a319-eed1a5a1c778") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.704467 master-0 kubenswrapper[34361]: E0224 05:37:22.704412 34361 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.704467 master-0 kubenswrapper[34361]: E0224 05:37:22.704451 34361 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.704667 master-0 kubenswrapper[34361]: E0224 05:37:22.704502 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls podName:39c4d0aa-c372-4d02-9302-337e68b56784 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.204483895 +0000 UTC m=+2.907100981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls") pod "machine-config-operator-7f8c75f984-922md" (UID: "39c4d0aa-c372-4d02-9302-337e68b56784") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.704667 master-0 kubenswrapper[34361]: E0224 05:37:22.704541 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls podName:e6a0fc47-b446-4902-9f8a-04870cbafcab nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.204524766 +0000 UTC m=+2.907141842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls") pod "machine-approver-7dd9c7d7b9-pb6sw" (UID: "e6a0fc47-b446-4902-9f8a-04870cbafcab") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.705668 master-0 kubenswrapper[34361]: E0224 05:37:22.705597 34361 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.705800 master-0 kubenswrapper[34361]: E0224 05:37:22.705685 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert podName:9666fc94-71e3-46af-8b45-26e3a085d076 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.205669388 +0000 UTC m=+2.908286474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert") pod "olm-operator-5499d7f7bb-8xdmq" (UID: "9666fc94-71e3-46af-8b45-26e3a085d076") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.706821 master-0 kubenswrapper[34361]: E0224 05:37:22.706765 34361 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.706821 master-0 kubenswrapper[34361]: E0224 05:37:22.706821 34361 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.707065 master-0 kubenswrapper[34361]: E0224 05:37:22.706854 34361 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.707065 master-0 kubenswrapper[34361]: E0224 05:37:22.706891 34361 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.707065 master-0 kubenswrapper[34361]: E0224 05:37:22.706861 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls podName:4bb05b64-74d7-41bc-991c-5d3cddc9d8f4 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.206836419 +0000 UTC m=+2.909453665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls") pod "cluster-samples-operator-65c5c48b9b-hmlsl" (UID: "4bb05b64-74d7-41bc-991c-5d3cddc9d8f4") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.707065 master-0 kubenswrapper[34361]: E0224 05:37:22.706932 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle podName:ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.206916281 +0000 UTC m=+2.909533367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle") pod "insights-operator-59b498fcfb-mprnx" (UID: "ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.707065 master-0 kubenswrapper[34361]: E0224 05:37:22.706956 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.206945091 +0000 UTC m=+2.909562167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.707065 master-0 kubenswrapper[34361]: E0224 05:37:22.706982 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls podName:80cc7ad6-051b-4ee5-94af-611388d9622a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.206969432 +0000 UTC m=+2.909586518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls") pod "kube-state-metrics-59584d565f-gsgxz" (UID: "80cc7ad6-051b-4ee5-94af-611388d9622a") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.708235 master-0 kubenswrapper[34361]: E0224 05:37:22.708173 34361 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.708235 master-0 kubenswrapper[34361]: E0224 05:37:22.708239 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.208224585 +0000 UTC m=+2.910841641 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.708535 master-0 kubenswrapper[34361]: E0224 05:37:22.708252 34361 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.708535 master-0 kubenswrapper[34361]: E0224 05:37:22.708272 34361 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.708535 master-0 kubenswrapper[34361]: E0224 05:37:22.708352 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs podName:c847d0c0-cc92-4d56-9e47-b83d9a39a745 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.208338049 +0000 UTC m=+2.910955285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs") pod "machine-config-server-xxl55" (UID: "c847d0c0-cc92-4d56-9e47-b83d9a39a745") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.708535 master-0 kubenswrapper[34361]: E0224 05:37:22.708257 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.708535 master-0 kubenswrapper[34361]: E0224 05:37:22.708354 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.708535 master-0 kubenswrapper[34361]: E0224 05:37:22.708418 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls podName:116e6b47-d435-49ca-abb5-088788daf16a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.208394691 +0000 UTC m=+2.911011747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls") pod "machine-api-operator-5c7cf458b4-65mc5" (UID: "116e6b47-d435-49ca-abb5-088788daf16a") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.708535 master-0 kubenswrapper[34361]: E0224 05:37:22.708451 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca podName:80cc7ad6-051b-4ee5-94af-611388d9622a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.208438862 +0000 UTC m=+2.911056128 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca") pod "kube-state-metrics-59584d565f-gsgxz" (UID: "80cc7ad6-051b-4ee5-94af-611388d9622a") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.708535 master-0 kubenswrapper[34361]: E0224 05:37:22.708482 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca podName:bf303acd-b62e-4aa3-bd8d-15f5844302d8 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.208468323 +0000 UTC m=+2.911085609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca") pod "openshift-state-metrics-6dbff8cb4c-hvjlk" (UID: "bf303acd-b62e-4aa3-bd8d-15f5844302d8") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.709504 master-0 kubenswrapper[34361]: E0224 05:37:22.709437 34361 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.709774 master-0 kubenswrapper[34361]: E0224 05:37:22.709719 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config podName:e6a0fc47-b446-4902-9f8a-04870cbafcab nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.209634484 +0000 UTC m=+2.912251720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config") pod "machine-approver-7dd9c7d7b9-pb6sw" (UID: "e6a0fc47-b446-4902-9f8a-04870cbafcab") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.710612 master-0 kubenswrapper[34361]: E0224 05:37:22.710569 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.710709 master-0 kubenswrapper[34361]: E0224 05:37:22.710618 34361 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.710709 master-0 kubenswrapper[34361]: E0224 05:37:22.710636 34361 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.710709 master-0 kubenswrapper[34361]: E0224 05:37:22.710681 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.710709 master-0 kubenswrapper[34361]: E0224 05:37:22.710690 34361 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.710971 master-0 kubenswrapper[34361]: E0224 05:37:22.710637 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap podName:80cc7ad6-051b-4ee5-94af-611388d9622a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.2106206 +0000 UTC m=+2.913237676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-59584d565f-gsgxz" (UID: "80cc7ad6-051b-4ee5-94af-611388d9622a") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.710971 master-0 kubenswrapper[34361]: E0224 05:37:22.710752 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config podName:f3cd3830-62b5-49d1-917e-bd993d685c65 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.210738573 +0000 UTC m=+2.913355649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" (UID: "f3cd3830-62b5-49d1-917e-bd993d685c65") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.710971 master-0 kubenswrapper[34361]: E0224 05:37:22.710778 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images podName:39c4d0aa-c372-4d02-9302-337e68b56784 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.210766374 +0000 UTC m=+2.913383460 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images") pod "machine-config-operator-7f8c75f984-922md" (UID: "39c4d0aa-c372-4d02-9302-337e68b56784") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.710971 master-0 kubenswrapper[34361]: E0224 05:37:22.710799 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls podName:bf303acd-b62e-4aa3-bd8d-15f5844302d8 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.210789525 +0000 UTC m=+2.913406611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-hvjlk" (UID: "bf303acd-b62e-4aa3-bd8d-15f5844302d8") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.710971 master-0 kubenswrapper[34361]: E0224 05:37:22.710831 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.210817245 +0000 UTC m=+2.913434321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.712066 master-0 kubenswrapper[34361]: E0224 05:37:22.712015 34361 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.712215 master-0 kubenswrapper[34361]: E0224 05:37:22.712075 34361 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.712215 master-0 kubenswrapper[34361]: E0224 05:37:22.712110 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.21208707 +0000 UTC m=+2.914704306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:22.712215 master-0 kubenswrapper[34361]: E0224 05:37:22.712148 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config podName:1e7f7c02-4c84-432a-8d59-25dd3bfef5c2 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:23.212134841 +0000 UTC m=+2.914751917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config") pod "machine-config-controller-54cb48566c-9ww5z" (UID: "1e7f7c02-4c84-432a-8d59-25dd3bfef5c2") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:22.719515 master-0 kubenswrapper[34361]: I0224 05:37:22.719462 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-7tq27"
Feb 24 05:37:22.738219 master-0 kubenswrapper[34361]: I0224 05:37:22.738139 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 24 05:37:22.758973 master-0 kubenswrapper[34361]: I0224 05:37:22.758865 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 24 05:37:22.779034 master-0 kubenswrapper[34361]: I0224 05:37:22.778963 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-d22d8"
Feb 24 05:37:22.798923 master-0 kubenswrapper[34361]: I0224 05:37:22.798842 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 24 05:37:22.818984 master-0 kubenswrapper[34361]: I0224 05:37:22.818893 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 05:37:22.839444 master-0 kubenswrapper[34361]: I0224 05:37:22.839350 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 05:37:22.858855 master-0 kubenswrapper[34361]: I0224 05:37:22.858787 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 05:37:22.879733 master-0 kubenswrapper[34361]: I0224 05:37:22.879654 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 05:37:22.897937 master-0 kubenswrapper[34361]: I0224 05:37:22.897858 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 24 05:37:22.918634 master-0 kubenswrapper[34361]: I0224 05:37:22.918583 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2gwgm"
Feb 24 05:37:22.938732 master-0 kubenswrapper[34361]: I0224 05:37:22.938672 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 24 05:37:22.957827 master-0 kubenswrapper[34361]: I0224 05:37:22.957650 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 05:37:22.960544 master-0 kubenswrapper[34361]: I0224 05:37:22.960503 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 24 05:37:22.979405 master-0 kubenswrapper[34361]: I0224 05:37:22.979327 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 24 05:37:23.007595 master-0 kubenswrapper[34361]: I0224 05:37:23.007522 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 24 05:37:23.017859 master-0 kubenswrapper[34361]: I0224 05:37:23.017812 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 05:37:23.039002 master-0 kubenswrapper[34361]: I0224 05:37:23.038943 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-44r64"
Feb 24 05:37:23.059383 master-0 kubenswrapper[34361]: I0224 05:37:23.059329 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-p74xw"
Feb 24 05:37:23.079872 master-0 kubenswrapper[34361]: I0224 05:37:23.079785 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 24 05:37:23.101225 master-0 kubenswrapper[34361]: I0224 05:37:23.100983 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 24 05:37:23.119244 master-0 kubenswrapper[34361]: I0224 05:37:23.119147 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 24 05:37:23.139041 master-0 kubenswrapper[34361]: I0224 05:37:23.138443 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 24 05:37:23.158808 master-0 kubenswrapper[34361]: I0224 05:37:23.158733 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 24 05:37:23.187061 master-0 kubenswrapper[34361]: I0224 05:37:23.186999 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 24 05:37:23.199482 master-0 kubenswrapper[34361]: I0224 05:37:23.199422 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 24 05:37:23.228120 master-0 kubenswrapper[34361]: I0224 05:37:23.227929 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9cc2t"
Feb 24 05:37:23.243340 master-0 kubenswrapper[34361]: I0224 05:37:23.242025 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 24 05:37:23.260877 master-0 kubenswrapper[34361]: I0224 05:37:23.260793 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.266808 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"
Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.266409 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b46907eb-36d6-4410-b7d8-8012b254c861-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth"
Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.266909 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z"
Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.266952 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.266997 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.267058 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.267095 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.267148 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: 
\"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.267202 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.267244 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.267287 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.267351 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:37:23.267392 master-0 kubenswrapper[34361]: I0224 05:37:23.267391 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267451 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267481 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267520 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267547 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " 
pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267573 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267599 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267628 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267667 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267692 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267742 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267798 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267822 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:37:23.267869 master-0 kubenswrapper[34361]: I0224 05:37:23.267879 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " 
pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.267907 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.267933 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.267965 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.267991 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.268057 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.268092 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.268117 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.268164 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.268189 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") pod 
\"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:23.268220 master-0 kubenswrapper[34361]: I0224 05:37:23.268215 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:23.268518 master-0 kubenswrapper[34361]: I0224 05:37:23.268249 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:37:23.268518 master-0 kubenswrapper[34361]: I0224 05:37:23.268284 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:23.268518 master-0 kubenswrapper[34361]: I0224 05:37:23.268339 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:23.268518 master-0 kubenswrapper[34361]: I0224 
05:37:23.268379 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:37:23.268948 master-0 kubenswrapper[34361]: I0224 05:37:23.268890 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:37:23.269287 master-0 kubenswrapper[34361]: I0224 05:37:23.269225 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-trusted-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:23.269377 master-0 kubenswrapper[34361]: I0224 05:37:23.269250 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-service-ca-bundle\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:23.269377 master-0 kubenswrapper[34361]: I0224 05:37:23.269327 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:37:23.269457 master-0 kubenswrapper[34361]: I0224 05:37:23.269343 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:23.269501 master-0 kubenswrapper[34361]: I0224 05:37:23.269448 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9666fc94-71e3-46af-8b45-26e3a085d076-srv-cert\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:37:23.269501 master-0 kubenswrapper[34361]: I0224 05:37:23.269468 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:37:23.269622 master-0 kubenswrapper[34361]: I0224 05:37:23.269585 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 
05:37:23.269673 master-0 kubenswrapper[34361]: I0224 05:37:23.269642 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:37:23.269719 master-0 kubenswrapper[34361]: I0224 05:37:23.269682 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-config\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:37:23.269896 master-0 kubenswrapper[34361]: I0224 05:37:23.269785 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:23.269896 master-0 kubenswrapper[34361]: I0224 05:37:23.269807 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/116e6b47-d435-49ca-abb5-088788daf16a-machine-api-operator-tls\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:37:23.269896 master-0 kubenswrapper[34361]: I0224 05:37:23.269831 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token\") 
pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:37:23.269896 master-0 kubenswrapper[34361]: I0224 05:37:23.269869 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" Feb 24 05:37:23.270027 master-0 kubenswrapper[34361]: I0224 05:37:23.269927 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:37:23.270027 master-0 kubenswrapper[34361]: I0224 05:37:23.269959 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" Feb 24 05:37:23.270027 master-0 kubenswrapper[34361]: I0224 05:37:23.269988 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:23.270027 master-0 
kubenswrapper[34361]: I0224 05:37:23.270016 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:23.270135 master-0 kubenswrapper[34361]: I0224 05:37:23.270040 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:37:23.270135 master-0 kubenswrapper[34361]: I0224 05:37:23.270088 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-serving-cert\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:23.270135 master-0 kubenswrapper[34361]: I0224 05:37:23.270092 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:23.270135 master-0 kubenswrapper[34361]: I0224 05:37:23.270108 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b46907eb-36d6-4410-b7d8-8012b254c861-cco-trusted-ca\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: 
\"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:37:23.270135 master-0 kubenswrapper[34361]: I0224 05:37:23.270091 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-auth-proxy-config\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:37:23.270337 master-0 kubenswrapper[34361]: I0224 05:37:23.270290 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:37:23.270393 master-0 kubenswrapper[34361]: I0224 05:37:23.270369 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:37:23.270563 master-0 kubenswrapper[34361]: I0224 05:37:23.270528 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:23.270599 master-0 kubenswrapper[34361]: I0224 05:37:23.270570 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5"
Feb 24 05:37:23.270631 master-0 kubenswrapper[34361]: I0224 05:37:23.270615 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"
Feb 24 05:37:23.270801 master-0 kubenswrapper[34361]: I0224 05:37:23.270628 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-cert\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z"
Feb 24 05:37:23.270801 master-0 kubenswrapper[34361]: I0224 05:37:23.270657 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t"
Feb 24 05:37:23.270801 master-0 kubenswrapper[34361]: I0224 05:37:23.270688 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:37:23.270801 master-0 kubenswrapper[34361]: I0224 05:37:23.270723 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz"
Feb 24 05:37:23.270801 master-0 kubenswrapper[34361]: I0224 05:37:23.270753 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:37:23.271067 master-0 kubenswrapper[34361]: I0224 05:37:23.270806 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f511d03-a182-4968-ba40-5c5c10e5e6be-serving-cert\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v"
Feb 24 05:37:23.271067 master-0 kubenswrapper[34361]: I0224 05:37:23.270825 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/116e6b47-d435-49ca-abb5-088788daf16a-images\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5"
Feb 24 05:37:23.271067 master-0 kubenswrapper[34361]: I0224 05:37:23.270913 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk"
Feb 24 05:37:23.271067 master-0 kubenswrapper[34361]: I0224 05:37:23.271004 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"
Feb 24 05:37:23.271067 master-0 kubenswrapper[34361]: I0224 05:37:23.271046 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs"
Feb 24 05:37:23.278493 master-0 kubenswrapper[34361]: I0224 05:37:23.278452 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jdmr6"
Feb 24 05:37:23.299961 master-0 kubenswrapper[34361]: I0224 05:37:23.299923 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 24 05:37:23.310100 master-0 kubenswrapper[34361]: I0224 05:37:23.310055 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5"
Feb 24 05:37:23.318899 master-0 kubenswrapper[34361]: I0224 05:37:23.318863 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 05:37:23.341384 master-0 kubenswrapper[34361]: I0224 05:37:23.339885 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-9rnhs"
Feb 24 05:37:23.360376 master-0 kubenswrapper[34361]: I0224 05:37:23.358826 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 24 05:37:23.360827 master-0 kubenswrapper[34361]: I0224 05:37:23.360784 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/23bdafdd-27c9-4461-be4a-3ea916ac3875-image-registry-operator-tls\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"
Feb 24 05:37:23.387451 master-0 kubenswrapper[34361]: I0224 05:37:23.387374 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 05:37:23.391893 master-0 kubenswrapper[34361]: I0224 05:37:23.391842 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23bdafdd-27c9-4461-be4a-3ea916ac3875-trusted-ca\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr"
Feb 24 05:37:23.398556 master-0 kubenswrapper[34361]: I0224 05:37:23.398524 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 05:37:23.418304 master-0 kubenswrapper[34361]: I0224 05:37:23.418220 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 24 05:37:23.441217 master-0 kubenswrapper[34361]: I0224 05:37:23.440957 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-fxsc2"
Feb 24 05:37:23.462809 master-0 kubenswrapper[34361]: I0224 05:37:23.462462 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 05:37:23.470628 master-0 kubenswrapper[34361]: I0224 05:37:23.470422 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-samples-operator-tls\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl"
Feb 24 05:37:23.484741 master-0 kubenswrapper[34361]: I0224 05:37:23.484693 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 24 05:37:23.499676 master-0 kubenswrapper[34361]: I0224 05:37:23.499624 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 05:37:23.509779 master-0 kubenswrapper[34361]: I0224 05:37:23.509741 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-images\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md"
Feb 24 05:37:23.518579 master-0 kubenswrapper[34361]: I0224 05:37:23.518484 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-qhzzf"
Feb 24 05:37:23.538620 master-0 kubenswrapper[34361]: I0224 05:37:23.538547 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 24 05:37:23.560222 master-0 kubenswrapper[34361]: I0224 05:37:23.560140 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 24 05:37:23.571060 master-0 kubenswrapper[34361]: I0224 05:37:23.571006 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39c4d0aa-c372-4d02-9302-337e68b56784-proxy-tls\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md"
Feb 24 05:37:23.577110 master-0 kubenswrapper[34361]: I0224 05:37:23.577059 34361 request.go:700] Waited for 2.002728563s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0
Feb 24 05:37:23.579696 master-0 kubenswrapper[34361]: I0224 05:37:23.579451 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 05:37:23.580175 master-0 kubenswrapper[34361]: I0224 05:37:23.580110 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3561f49-0808-4d96-95ec-456fcb5c5bb4-mcd-auth-proxy-config\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz"
Feb 24 05:37:23.580737 master-0 kubenswrapper[34361]: I0224 05:37:23.580678 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39c4d0aa-c372-4d02-9302-337e68b56784-auth-proxy-config\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md"
Feb 24 05:37:23.589761 master-0 kubenswrapper[34361]: I0224 05:37:23.589738 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-mcc-auth-proxy-config\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z"
Feb 24 05:37:23.603384 master-0 kubenswrapper[34361]: I0224 05:37:23.603298 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 24 05:37:23.618405 master-0 kubenswrapper[34361]: I0224 05:37:23.618378 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 24 05:37:23.619853 master-0 kubenswrapper[34361]: I0224 05:37:23.619809 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e6a0fc47-b446-4902-9f8a-04870cbafcab-machine-approver-tls\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw"
Feb 24 05:37:23.639124 master-0 kubenswrapper[34361]: I0224 05:37:23.638481 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 05:37:23.639124 master-0 kubenswrapper[34361]: I0224 05:37:23.639007 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-auth-proxy-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw"
Feb 24 05:37:23.658898 master-0 kubenswrapper[34361]: I0224 05:37:23.658874 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-w9h5v"
Feb 24 05:37:23.691088 master-0 kubenswrapper[34361]: I0224 05:37:23.691052 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 05:37:23.695495 master-0 kubenswrapper[34361]: I0224 05:37:23.695371 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a0fc47-b446-4902-9f8a-04870cbafcab-config\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw"
Feb 24 05:37:23.700431 master-0 kubenswrapper[34361]: I0224 05:37:23.699343 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 05:37:23.722434 master-0 kubenswrapper[34361]: I0224 05:37:23.722375 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 24 05:37:23.742070 master-0 kubenswrapper[34361]: I0224 05:37:23.741866 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-qvwf6"
Feb 24 05:37:23.762003 master-0 kubenswrapper[34361]: I0224 05:37:23.761902 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 24 05:37:23.776689 master-0 kubenswrapper[34361]: I0224 05:37:23.776486 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-apiservice-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs"
Feb 24 05:37:23.780336 master-0 kubenswrapper[34361]: I0224 05:37:23.777616 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49b426a3-f16e-40e9-a166-7270d4cfcc60-webhook-cert\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs"
Feb 24 05:37:23.781892 master-0 kubenswrapper[34361]: I0224 05:37:23.781387 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-srdvz"
Feb 24 05:37:23.799217 master-0 kubenswrapper[34361]: I0224 05:37:23.799139 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-46rst"
Feb 24 05:37:23.819675 master-0 kubenswrapper[34361]: I0224 05:37:23.819592 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zm289"
Feb 24 05:37:23.842345 master-0 kubenswrapper[34361]: I0224 05:37:23.840198 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-sdvhz"
Feb 24 05:37:23.860617 master-0 kubenswrapper[34361]: I0224 05:37:23.860516 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 24 05:37:23.864677 master-0 kubenswrapper[34361]: I0224 05:37:23.864603 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a3561f49-0808-4d96-95ec-456fcb5c5bb4-proxy-tls\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz"
Feb 24 05:37:23.888826 master-0 kubenswrapper[34361]: I0224 05:37:23.888757 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 24 05:37:23.900231 master-0 kubenswrapper[34361]: I0224 05:37:23.900168 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-images\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t"
Feb 24 05:37:23.900231 master-0 kubenswrapper[34361]: I0224 05:37:23.900184 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 05:37:23.934932 master-0 kubenswrapper[34361]: I0224 05:37:23.934862 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 05:37:23.945907 master-0 kubenswrapper[34361]: I0224 05:37:23.945841 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-zqsq8"
Feb 24 05:37:23.960543 master-0 kubenswrapper[34361]: I0224 05:37:23.960486 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 24 05:37:23.969787 master-0 kubenswrapper[34361]: I0224 05:37:23.969747 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/f3cd3830-62b5-49d1-917e-bd993d685c65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t"
Feb 24 05:37:23.980853 master-0 kubenswrapper[34361]: I0224 05:37:23.980787 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 24 05:37:23.982783 master-0 kubenswrapper[34361]: I0224 05:37:23.982733 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3cd3830-62b5-49d1-917e-bd993d685c65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t"
Feb 24 05:37:24.005605 master-0 kubenswrapper[34361]: I0224 05:37:24.005449 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 24 05:37:24.010904 master-0 kubenswrapper[34361]: I0224 05:37:24.010843 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-node-bootstrap-token\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55"
Feb 24 05:37:24.031340 master-0 kubenswrapper[34361]: I0224 05:37:24.031287 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 24 05:37:24.031759 master-0 kubenswrapper[34361]: I0224 05:37:24.031697 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b8d28792-2365-4e9e-b61a-46cd2ef8b632-metrics-client-ca\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:37:24.038321 master-0 kubenswrapper[34361]: I0224 05:37:24.038291 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 05:37:24.042712 master-0 kubenswrapper[34361]: I0224 05:37:24.042689 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-metrics-client-ca\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:37:24.042899 master-0 kubenswrapper[34361]: I0224 05:37:24.042853 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-metrics-client-ca\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x"
Feb 24 05:37:24.042991 master-0 kubenswrapper[34361]: I0224 05:37:24.042939 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c847d0c0-cc92-4d56-9e47-b83d9a39a745-certs\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55"
Feb 24 05:37:24.043221 master-0 kubenswrapper[34361]: I0224 05:37:24.043204 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-metrics-client-ca\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:37:24.043627 master-0 kubenswrapper[34361]: I0224 05:37:24.043580 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/bf303acd-b62e-4aa3-bd8d-15f5844302d8-metrics-client-ca\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk"
Feb 24 05:37:24.058265 master-0 kubenswrapper[34361]: I0224 05:37:24.058232 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2bzhs"
Feb 24 05:37:24.078247 master-0 kubenswrapper[34361]: I0224 05:37:24.078197 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 24 05:37:24.081137 master-0 kubenswrapper[34361]: I0224 05:37:24.081100 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:37:24.098604 master-0 kubenswrapper[34361]: I0224 05:37:24.098465 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gnn9c"
Feb 24 05:37:24.118984 master-0 kubenswrapper[34361]: I0224 05:37:24.118915 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 24 05:37:24.119482 master-0 kubenswrapper[34361]: I0224 05:37:24.119447 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-tls\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:37:24.139059 master-0 kubenswrapper[34361]: I0224 05:37:24.138981 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 24 05:37:24.141405 master-0 kubenswrapper[34361]: I0224 05:37:24.141360 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz"
Feb 24 05:37:24.157717 master-0 kubenswrapper[34361]: I0224 05:37:24.157669 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-9xtkh"
Feb 24 05:37:24.178832 master-0 kubenswrapper[34361]: I0224 05:37:24.178757 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 24 05:37:24.179991 master-0 kubenswrapper[34361]: I0224 05:37:24.179947 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8d28792-2365-4e9e-b61a-46cd2ef8b632-prometheus-operator-tls\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: \"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh"
Feb 24 05:37:24.198687 master-0 kubenswrapper[34361]: I0224 05:37:24.198612 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-rzbrp"
Feb 24 05:37:24.218184 master-0 kubenswrapper[34361]: I0224 05:37:24.218135 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 24 05:37:24.219890 master-0 kubenswrapper[34361]: I0224 05:37:24.219870 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-proxy-tls\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z"
Feb 24 05:37:24.238865 master-0 kubenswrapper[34361]: I0224 05:37:24.238794 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9sp2t"
Feb 24 05:37:24.257991 master-0 kubenswrapper[34361]: I0224 05:37:24.257870 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 24 05:37:24.259101 master-0 kubenswrapper[34361]: I0224 05:37:24.259068 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-tls\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz"
Feb 24 05:37:24.268967 master-0 kubenswrapper[34361]: E0224 05:37:24.268922 34361 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.268967 master-0 kubenswrapper[34361]: E0224 05:37:24.268953 34361 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269111 master-0 kubenswrapper[34361]: E0224 05:37:24.268939 34361 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269111 master-0 kubenswrapper[34361]: E0224 05:37:24.269037 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.26901478 +0000 UTC m=+4.971631826 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269111 master-0 kubenswrapper[34361]: E0224 05:37:24.269059 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269049051 +0000 UTC m=+4.971666097 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269111 master-0 kubenswrapper[34361]: E0224 05:37:24.269073 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269066642 +0000 UTC m=+4.971683688 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269111 master-0 kubenswrapper[34361]: E0224 05:37:24.269073 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:24.269499 master-0 kubenswrapper[34361]: E0224 05:37:24.269094 34361 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269499 master-0 kubenswrapper[34361]: E0224 05:37:24.269164 34361 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269499 master-0 kubenswrapper[34361]: E0224 05:37:24.269423 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269166764 +0000 UTC m=+4.971783820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:24.269499 master-0 kubenswrapper[34361]: E0224 05:37:24.269448 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269438672 +0000 UTC m=+4.972055738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269499 master-0 kubenswrapper[34361]: E0224 05:37:24.269466 34361 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269499 master-0 kubenswrapper[34361]: E0224 05:37:24.269465 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:24.269677 master-0 kubenswrapper[34361]: E0224 05:37:24.269512 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:24.269677 master-0 kubenswrapper[34361]: E0224 05:37:24.269518 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:24.269677 master-0 kubenswrapper[34361]: E0224 05:37:24.269164 34361 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:24.269769 master-0 kubenswrapper[34361]: E0224 05:37:24.269524 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config podName:80cc7ad6-051b-4ee5-94af-611388d9622a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269493093 +0000 UTC m=+4.972110139 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-59584d565f-gsgxz" (UID: "80cc7ad6-051b-4ee5-94af-611388d9622a") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269769 master-0 kubenswrapper[34361]: E0224 05:37:24.269712 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls podName:bf303acd-b62e-4aa3-bd8d-15f5844302d8 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269698758 +0000 UTC m=+4.972315814 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls") pod "openshift-state-metrics-6dbff8cb4c-hvjlk" (UID: "bf303acd-b62e-4aa3-bd8d-15f5844302d8") : failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269769 master-0 kubenswrapper[34361]: E0224 05:37:24.269721 34361 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7qtvbjhkqad41: failed to sync secret cache: timed out waiting for the condition
Feb 24 05:37:24.269769 master-0 kubenswrapper[34361]: E0224 05:37:24.269733 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap podName:80cc7ad6-051b-4ee5-94af-611388d9622a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269723379 +0000 UTC m=+4.972340445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-59584d565f-gsgxz" (UID: "80cc7ad6-051b-4ee5-94af-611388d9622a") : failed to sync configmap cache: timed out waiting for the condition
Feb 24 05:37:24.269769 master-0 kubenswrapper[34361]: E0224 05:37:24.269764 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.26975589 +0000 UTC m=+4.972372936 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.269907 master-0 kubenswrapper[34361]: E0224 05:37:24.269782 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269774761 +0000 UTC m=+4.972391807 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:24.269907 master-0 kubenswrapper[34361]: E0224 05:37:24.269799 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269792171 +0000 UTC m=+4.972409217 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:24.269907 master-0 kubenswrapper[34361]: E0224 05:37:24.269814 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle podName:2f48332e-92de-42aa-a6e6-db161f005e74 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.269807281 +0000 UTC m=+4.972424327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle") pod "metrics-server-65cdf565cd-555rj" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74") : failed to sync configmap cache: timed out waiting for the condition Feb 24 05:37:24.270684 master-0 kubenswrapper[34361]: E0224 05:37:24.270648 34361 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.270758 master-0 kubenswrapper[34361]: E0224 05:37:24.270718 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert podName:f938daff-1d36-4348-a689-3d1607058296 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.270695205 +0000 UTC m=+4.973312251 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert") pod "ingress-canary-5m82s" (UID: "f938daff-1d36-4348-a689-3d1607058296") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.270758 master-0 kubenswrapper[34361]: E0224 05:37:24.270721 34361 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.270828 master-0 kubenswrapper[34361]: E0224 05:37:24.270771 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs podName:6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.270759607 +0000 UTC m=+4.973376653 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs") pod "multus-admission-controller-5f54bf67d4-5tf9t" (UID: "6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.270828 master-0 kubenswrapper[34361]: E0224 05:37:24.270805 34361 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.270970 master-0 kubenswrapper[34361]: E0224 05:37:24.270933 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.27088735 +0000 UTC m=+4.973504606 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.270970 master-0 kubenswrapper[34361]: E0224 05:37:24.270945 34361 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.271042 master-0 kubenswrapper[34361]: E0224 05:37:24.270999 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls podName:1163571d-f555-41ad-b04c-74c2dc452efe nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.270989603 +0000 UTC m=+4.973606899 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls") pod "telemeter-client-96c995bf5-57k8x" (UID: "1163571d-f555-41ad-b04c-74c2dc452efe") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.271883 master-0 kubenswrapper[34361]: E0224 05:37:24.271856 34361 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.271944 master-0 kubenswrapper[34361]: E0224 05:37:24.271922 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config podName:bf303acd-b62e-4aa3-bd8d-15f5844302d8 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:25.271906098 +0000 UTC m=+4.974523144 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-6dbff8cb4c-hvjlk" (UID: "bf303acd-b62e-4aa3-bd8d-15f5844302d8") : failed to sync secret cache: timed out waiting for the condition Feb 24 05:37:24.277886 master-0 kubenswrapper[34361]: I0224 05:37:24.277847 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 24 05:37:24.300965 master-0 kubenswrapper[34361]: I0224 05:37:24.300903 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 24 05:37:24.319211 master-0 kubenswrapper[34361]: I0224 05:37:24.319164 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hpmvm" Feb 24 05:37:24.338886 master-0 kubenswrapper[34361]: I0224 05:37:24.338774 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 24 05:37:24.360403 master-0 kubenswrapper[34361]: I0224 05:37:24.360361 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 24 05:37:24.381223 master-0 kubenswrapper[34361]: I0224 05:37:24.381179 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 24 05:37:24.398791 master-0 kubenswrapper[34361]: I0224 05:37:24.398747 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 24 05:37:24.422082 master-0 kubenswrapper[34361]: I0224 05:37:24.422012 34361 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"metrics-server-tls" Feb 24 05:37:24.451732 master-0 kubenswrapper[34361]: I0224 05:37:24.447374 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 05:37:24.458660 master-0 kubenswrapper[34361]: I0224 05:37:24.458625 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-l2gcc" Feb 24 05:37:24.479243 master-0 kubenswrapper[34361]: I0224 05:37:24.479162 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 24 05:37:24.505037 master-0 kubenswrapper[34361]: I0224 05:37:24.504985 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 05:37:24.522453 master-0 kubenswrapper[34361]: I0224 05:37:24.518399 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-ll4w9" Feb 24 05:37:24.539707 master-0 kubenswrapper[34361]: I0224 05:37:24.539630 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7qtvbjhkqad41" Feb 24 05:37:24.561694 master-0 kubenswrapper[34361]: I0224 05:37:24.560098 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 24 05:37:24.584182 master-0 kubenswrapper[34361]: I0224 05:37:24.578398 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 24 05:37:24.597832 master-0 kubenswrapper[34361]: I0224 05:37:24.597156 34361 request.go:700] Waited for 2.995048381s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dtelemeter-client-kube-rbac-proxy-config&limit=500&resourceVersion=0 Feb 24 05:37:24.599839 master-0 kubenswrapper[34361]: I0224 05:37:24.599780 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 24 05:37:24.627491 master-0 kubenswrapper[34361]: I0224 05:37:24.627431 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 24 05:37:24.638621 master-0 kubenswrapper[34361]: I0224 05:37:24.638575 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 24 05:37:24.660294 master-0 kubenswrapper[34361]: I0224 05:37:24.660197 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-zl45m" Feb 24 05:37:24.678125 master-0 kubenswrapper[34361]: I0224 05:37:24.678073 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 24 05:37:24.698636 master-0 kubenswrapper[34361]: I0224 05:37:24.698568 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 24 05:37:24.718219 master-0 kubenswrapper[34361]: I0224 05:37:24.718146 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-l6bv5" Feb 24 05:37:24.738218 master-0 kubenswrapper[34361]: I0224 05:37:24.738151 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 05:37:24.762671 master-0 kubenswrapper[34361]: E0224 05:37:24.762536 34361 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: 
autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0" Feb 24 05:37:24.794183 master-0 kubenswrapper[34361]: I0224 05:37:24.794007 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgl5l\" (UniqueName: \"kubernetes.io/projected/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-api-access-hgl5l\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:24.819509 master-0 kubenswrapper[34361]: I0224 05:37:24.819436 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lkf2\" (UniqueName: \"kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2\") pod \"controller-manager-7657d7494-mmsz6\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") " pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:24.835438 master-0 kubenswrapper[34361]: I0224 05:37:24.835366 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cczbm\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-kube-api-access-cczbm\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:37:24.851992 master-0 kubenswrapper[34361]: I0224 05:37:24.851913 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vdmz\" (UniqueName: \"kubernetes.io/projected/3f511d03-a182-4968-ba40-5c5c10e5e6be-kube-api-access-4vdmz\") pod \"openshift-config-operator-6f47d587d6-7b87v\" (UID: \"3f511d03-a182-4968-ba40-5c5c10e5e6be\") " pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:37:24.870997 master-0 kubenswrapper[34361]: I0224 05:37:24.870924 34361 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jx4rw\" (UniqueName: \"kubernetes.io/projected/c00ee01c-143b-4e44-823c-c6bfdedb8ed6-kube-api-access-jx4rw\") pod \"multus-8qp5g\" (UID: \"c00ee01c-143b-4e44-823c-c6bfdedb8ed6\") " pod="openshift-multus/multus-8qp5g" Feb 24 05:37:24.892724 master-0 kubenswrapper[34361]: I0224 05:37:24.892638 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdpfz\" (UniqueName: \"kubernetes.io/projected/c177f8fe-8145-4557-ae78-af121efe001c-kube-api-access-mdpfz\") pod \"cluster-monitoring-operator-6bb6d78bf-mzb7q\" (UID: \"c177f8fe-8145-4557-ae78-af121efe001c\") " pod="openshift-monitoring/cluster-monitoring-operator-6bb6d78bf-mzb7q" Feb 24 05:37:24.913682 master-0 kubenswrapper[34361]: I0224 05:37:24.913615 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-589rv\" (UniqueName: \"kubernetes.io/projected/3363f001-1cfa-41f5-b245-30cc99dd09cb-kube-api-access-589rv\") pod \"dns-default-cdk2w\" (UID: \"3363f001-1cfa-41f5-b245-30cc99dd09cb\") " pod="openshift-dns/dns-default-cdk2w" Feb 24 05:37:24.940819 master-0 kubenswrapper[34361]: I0224 05:37:24.940739 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8z6s\" (UniqueName: \"kubernetes.io/projected/8f3825c1-975c-40b5-a6ad-0f200968b3cd-kube-api-access-l8z6s\") pod \"redhat-operators-xm8sw\" (UID: \"8f3825c1-975c-40b5-a6ad-0f200968b3cd\") " pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:24.967352 master-0 kubenswrapper[34361]: I0224 05:37:24.966725 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xzk\" (UniqueName: \"kubernetes.io/projected/633d33a1-e1b1-40b0-b56a-afb0c1085d97-kube-api-access-62xzk\") pod \"cluster-olm-operator-5bd7768f54-qh6j7\" (UID: \"633d33a1-e1b1-40b0-b56a-afb0c1085d97\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-5bd7768f54-qh6j7" Feb 24 05:37:24.978596 
master-0 kubenswrapper[34361]: I0224 05:37:24.978508 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9pp4\" (UniqueName: \"kubernetes.io/projected/03e4cebe-f3df-423f-be2b-7fb22bd58341-kube-api-access-f9pp4\") pod \"migrator-5c85bff57-txt9d\" (UID: \"03e4cebe-f3df-423f-be2b-7fb22bd58341\") " pod="openshift-kube-storage-version-migrator/migrator-5c85bff57-txt9d" Feb 24 05:37:25.000082 master-0 kubenswrapper[34361]: I0224 05:37:25.000018 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl24z\" (UniqueName: \"kubernetes.io/projected/798dcf46-8377-46b8-8387-5261d9bbefa1-kube-api-access-jl24z\") pod \"node-resolver-ng8tz\" (UID: \"798dcf46-8377-46b8-8387-5261d9bbefa1\") " pod="openshift-dns/node-resolver-ng8tz" Feb 24 05:37:25.016681 master-0 kubenswrapper[34361]: I0224 05:37:25.016619 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjtv8\" (UniqueName: \"kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8\") pod \"route-controller-manager-654dcf5585-fgmnd\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") " pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:37:25.032559 master-0 kubenswrapper[34361]: I0224 05:37:25.032487 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl828\" (UniqueName: \"kubernetes.io/projected/767424fb-babf-4b73-b5e2-0bee65fcf207-kube-api-access-hl828\") pod \"multus-additional-cni-plugins-jknmn\" (UID: \"767424fb-babf-4b73-b5e2-0bee65fcf207\") " pod="openshift-multus/multus-additional-cni-plugins-jknmn" Feb 24 05:37:25.056034 master-0 kubenswrapper[34361]: I0224 05:37:25.055824 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22813c83-2f60-44ad-9624-ad367cec08f7-kube-api-access\") pod 
\"kube-controller-manager-operator-7bcfbc574b-8zrj9\" (UID: \"22813c83-2f60-44ad-9624-ad367cec08f7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-7bcfbc574b-8zrj9" Feb 24 05:37:25.075768 master-0 kubenswrapper[34361]: I0224 05:37:25.075690 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-bound-sa-token\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:37:25.090978 master-0 kubenswrapper[34361]: I0224 05:37:25.090909 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgl4j\" (UniqueName: \"kubernetes.io/projected/347c43e5-86d5-436f-bdc5-1c7bbe19ab2a-kube-api-access-qgl4j\") pod \"operator-controller-controller-manager-9cc7d7bb-t75jj\" (UID: \"347c43e5-86d5-436f-bdc5-1c7bbe19ab2a\") " pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj" Feb 24 05:37:25.110716 master-0 kubenswrapper[34361]: I0224 05:37:25.110658 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25dbj\" (UniqueName: \"kubernetes.io/projected/cc0cfdd6-99d8-40dc-87d0-06c2a6767f38-kube-api-access-25dbj\") pod \"catalog-operator-596f79dd6f-v22h2\" (UID: \"cc0cfdd6-99d8-40dc-87d0-06c2a6767f38\") " pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2" Feb 24 05:37:25.130424 master-0 kubenswrapper[34361]: I0224 05:37:25.130365 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwzq\" (UniqueName: \"kubernetes.io/projected/c3fed34f-b275-42c6-af6c-8de3e6fe0f9e-kube-api-access-tlwzq\") pod \"kube-storage-version-migrator-operator-fc889cfd5-r6p58\" (UID: \"c3fed34f-b275-42c6-af6c-8de3e6fe0f9e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fc889cfd5-r6p58" Feb 24 05:37:25.155162 master-0 kubenswrapper[34361]: I0224 05:37:25.155099 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvznm\" (UniqueName: \"kubernetes.io/projected/c847d0c0-cc92-4d56-9e47-b83d9a39a745-kube-api-access-qvznm\") pod \"machine-config-server-xxl55\" (UID: \"c847d0c0-cc92-4d56-9e47-b83d9a39a745\") " pod="openshift-machine-config-operator/machine-config-server-xxl55" Feb 24 05:37:25.163543 master-0 kubenswrapper[34361]: I0224 05:37:25.163483 34361 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 05:37:25.166808 master-0 kubenswrapper[34361]: I0224 05:37:25.166740 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 24 05:37:25.166898 master-0 kubenswrapper[34361]: I0224 05:37:25.166813 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 24 05:37:25.166898 master-0 kubenswrapper[34361]: I0224 05:37:25.166833 34361 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 24 05:37:25.167322 master-0 kubenswrapper[34361]: I0224 05:37:25.167244 34361 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 24 05:37:25.174757 master-0 kubenswrapper[34361]: I0224 05:37:25.174704 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jflg\" (UniqueName: \"kubernetes.io/projected/75b4304c-09f2-499e-8c2f-da603e43ba72-kube-api-access-7jflg\") pod \"redhat-marketplace-v64s6\" (UID: \"75b4304c-09f2-499e-8c2f-da603e43ba72\") " pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:37:25.194811 master-0 kubenswrapper[34361]: I0224 05:37:25.194745 34361 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-dcj62\" (UniqueName: \"kubernetes.io/projected/f77227c8-c52d-4a71-ae1b-792055f6f23d-kube-api-access-dcj62\") pod \"network-operator-7d7db75979-4fk6k\" (UID: \"f77227c8-c52d-4a71-ae1b-792055f6f23d\") " pod="openshift-network-operator/network-operator-7d7db75979-4fk6k" Feb 24 05:37:25.210940 master-0 kubenswrapper[34361]: I0224 05:37:25.210891 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d-kube-api-access\") pod \"kube-apiserver-operator-5d87bf58c-ncrqj\" (UID: \"17f8e10b-88dc-4158-a7c4-aaa2f5d5fb9d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5d87bf58c-ncrqj" Feb 24 05:37:25.234869 master-0 kubenswrapper[34361]: I0224 05:37:25.234801 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb4rw\" (UniqueName: \"kubernetes.io/projected/b79ef90c-dc66-4d5f-8943-2c3ac68796ba-kube-api-access-zb4rw\") pod \"csi-snapshot-controller-6847bb4785-vqn96\" (UID: \"b79ef90c-dc66-4d5f-8943-2c3ac68796ba\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-6847bb4785-vqn96" Feb 24 05:37:25.250913 master-0 kubenswrapper[34361]: I0224 05:37:25.250852 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb68s\" (UniqueName: \"kubernetes.io/projected/6e5ede6a-9d4b-47a2-b4ba-e6018910d05a-kube-api-access-zb68s\") pod \"cluster-node-tuning-operator-bcf775fc9-h99t4\" (UID: \"6e5ede6a-9d4b-47a2-b4ba-e6018910d05a\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bcf775fc9-h99t4" Feb 24 05:37:25.272111 master-0 kubenswrapper[34361]: I0224 05:37:25.272057 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lsr\" (UniqueName: \"kubernetes.io/projected/b8d28792-2365-4e9e-b61a-46cd2ef8b632-kube-api-access-77lsr\") pod \"prometheus-operator-754bc4d665-xjddh\" (UID: 
\"b8d28792-2365-4e9e-b61a-46cd2ef8b632\") " pod="openshift-monitoring/prometheus-operator-754bc4d665-xjddh" Feb 24 05:37:25.291331 master-0 kubenswrapper[34361]: I0224 05:37:25.291249 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj8cq\" (UniqueName: \"kubernetes.io/projected/d86d5bbe-3768-4695-810b-245a56e4fd1d-kube-api-access-xj8cq\") pod \"service-ca-operator-c48c8bf7c-mcdrl\" (UID: \"d86d5bbe-3768-4695-810b-245a56e4fd1d\") " pod="openshift-service-ca-operator/service-ca-operator-c48c8bf7c-mcdrl" Feb 24 05:37:25.313255 master-0 kubenswrapper[34361]: I0224 05:37:25.313129 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8dtv\" (UniqueName: \"kubernetes.io/projected/b46907eb-36d6-4410-b7d8-8012b254c861-kube-api-access-k8dtv\") pod \"cloud-credential-operator-6968c58f46-68rth\" (UID: \"b46907eb-36d6-4410-b7d8-8012b254c861\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-6968c58f46-68rth" Feb 24 05:37:25.328138 master-0 kubenswrapper[34361]: I0224 05:37:25.328080 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.328138 master-0 kubenswrapper[34361]: I0224 05:37:25.328157 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" Feb 24 05:37:25.328492 master-0 kubenswrapper[34361]: I0224 05:37:25.328189 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:37:25.328492 master-0 kubenswrapper[34361]: I0224 05:37:25.328445 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.328583 master-0 kubenswrapper[34361]: I0224 05:37:25.328556 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.328652 master-0 kubenswrapper[34361]: I0224 05:37:25.328603 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-federate-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.328884 master-0 kubenswrapper[34361]: I0224 05:37:25.328850 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f938daff-1d36-4348-a689-3d1607058296-cert\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:37:25.328964 
master-0 kubenswrapper[34361]: I0224 05:37:25.328927 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:25.329036 master-0 kubenswrapper[34361]: I0224 05:37:25.329005 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-client-tls\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.329097 master-0 kubenswrapper[34361]: I0224 05:37:25.329009 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-webhook-certs\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" Feb 24 05:37:25.329255 master-0 kubenswrapper[34361]: I0224 05:37:25.329220 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:25.329325 master-0 kubenswrapper[34361]: I0224 05:37:25.329207 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.329387 master-0 kubenswrapper[34361]: I0224 05:37:25.329363 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.329524 master-0 kubenswrapper[34361]: I0224 05:37:25.329496 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:25.329665 master-0 kubenswrapper[34361]: I0224 05:37:25.329623 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.329747 master-0 kubenswrapper[34361]: I0224 05:37:25.329674 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" 
(UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.329909 master-0 kubenswrapper[34361]: I0224 05:37:25.329884 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:25.329991 master-0 kubenswrapper[34361]: I0224 05:37:25.329895 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.329991 master-0 kubenswrapper[34361]: I0224 05:37:25.329953 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.330087 master-0 kubenswrapper[34361]: I0224 05:37:25.330028 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.330206 master-0 kubenswrapper[34361]: I0224 05:37:25.330177 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-serving-certs-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.330426 master-0 kubenswrapper[34361]: I0224 05:37:25.330391 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.330536 master-0 kubenswrapper[34361]: I0224 05:37:25.330512 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.330612 master-0 kubenswrapper[34361]: I0224 05:37:25.330520 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.330612 master-0 kubenswrapper[34361]: I0224 05:37:25.330582 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: 
\"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.330693 master-0 kubenswrapper[34361]: I0224 05:37:25.330650 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1163571d-f555-41ad-b04c-74c2dc452efe-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.330827 master-0 kubenswrapper[34361]: I0224 05:37:25.330789 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:25.331006 master-0 kubenswrapper[34361]: I0224 05:37:25.330948 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.331079 master-0 kubenswrapper[34361]: I0224 05:37:25.331039 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 
05:37:25.331123 master-0 kubenswrapper[34361]: I0224 05:37:25.331067 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.331123 master-0 kubenswrapper[34361]: I0224 05:37:25.331071 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1163571d-f555-41ad-b04c-74c2dc452efe-telemeter-trusted-ca-bundle\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.331368 master-0 kubenswrapper[34361]: I0224 05:37:25.331347 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.331488 master-0 kubenswrapper[34361]: I0224 05:37:25.331454 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf303acd-b62e-4aa3-bd8d-15f5844302d8-openshift-state-metrics-tls\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:25.331584 master-0 kubenswrapper[34361]: I0224 05:37:25.331561 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/80cc7ad6-051b-4ee5-94af-611388d9622a-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-59584d565f-gsgxz\" (UID: \"80cc7ad6-051b-4ee5-94af-611388d9622a\") " pod="openshift-monitoring/kube-state-metrics-59584d565f-gsgxz" Feb 24 05:37:25.350146 master-0 kubenswrapper[34361]: I0224 05:37:25.350090 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtnxg\" (UniqueName: \"kubernetes.io/projected/b21148ab-4e3e-4d0b-b198-3278dd8e2e7e-kube-api-access-dtnxg\") pod \"apiserver-fdc9d7cdd-8v72m\" (UID: \"b21148ab-4e3e-4d0b-b198-3278dd8e2e7e\") " pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:25.354218 master-0 kubenswrapper[34361]: I0224 05:37:25.354171 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzp4b\" (UniqueName: \"kubernetes.io/projected/d9492fbf-d0f4-4ecf-84ba-b089d69535c1-kube-api-access-fzp4b\") pod \"catalogd-controller-manager-84b8d9d697-zvzxs\" (UID: \"d9492fbf-d0f4-4ecf-84ba-b089d69535c1\") " pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:25.378079 master-0 kubenswrapper[34361]: I0224 05:37:25.378023 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvm29\" (UniqueName: \"kubernetes.io/projected/be7a4b9e-1e9a-4298-b804-21b683805c0e-kube-api-access-wvm29\") pod \"router-default-7b65dc9fcb-zxkt2\" (UID: \"be7a4b9e-1e9a-4298-b804-21b683805c0e\") " pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:37:25.390892 master-0 kubenswrapper[34361]: I0224 05:37:25.390820 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bwl7\" (UniqueName: \"kubernetes.io/projected/9666fc94-71e3-46af-8b45-26e3a085d076-kube-api-access-5bwl7\") pod \"olm-operator-5499d7f7bb-8xdmq\" (UID: \"9666fc94-71e3-46af-8b45-26e3a085d076\") " pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" 
Feb 24 05:37:25.418153 master-0 kubenswrapper[34361]: I0224 05:37:25.418085 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjzn\" (UniqueName: \"kubernetes.io/projected/4bb05b64-74d7-41bc-991c-5d3cddc9d8f4-kube-api-access-7vjzn\") pod \"cluster-samples-operator-65c5c48b9b-hmlsl\" (UID: \"4bb05b64-74d7-41bc-991c-5d3cddc9d8f4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-65c5c48b9b-hmlsl" Feb 24 05:37:25.431452 master-0 kubenswrapper[34361]: I0224 05:37:25.431396 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4qf\" (UniqueName: \"kubernetes.io/projected/e6a0fc47-b446-4902-9f8a-04870cbafcab-kube-api-access-kx4qf\") pod \"machine-approver-7dd9c7d7b9-pb6sw\" (UID: \"e6a0fc47-b446-4902-9f8a-04870cbafcab\") " pod="openshift-cluster-machine-approver/machine-approver-7dd9c7d7b9-pb6sw" Feb 24 05:37:25.452201 master-0 kubenswrapper[34361]: I0224 05:37:25.452084 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2fkp\" (UniqueName: \"kubernetes.io/projected/39c4d0aa-c372-4d02-9302-337e68b56784-kube-api-access-b2fkp\") pod \"machine-config-operator-7f8c75f984-922md\" (UID: \"39c4d0aa-c372-4d02-9302-337e68b56784\") " pod="openshift-machine-config-operator/machine-config-operator-7f8c75f984-922md" Feb 24 05:37:25.478226 master-0 kubenswrapper[34361]: I0224 05:37:25.478155 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkz2q\" (UniqueName: \"kubernetes.io/projected/6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03-kube-api-access-rkz2q\") pod \"multus-admission-controller-5f54bf67d4-5tf9t\" (UID: \"6ddb5ab7-0c1f-44ed-84fa-aaeb6b553e03\") " pod="openshift-multus/multus-admission-controller-5f54bf67d4-5tf9t" Feb 24 05:37:25.497910 master-0 kubenswrapper[34361]: I0224 05:37:25.497857 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc42f\" (UniqueName: 
\"kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f\") pod \"metrics-server-65cdf565cd-555rj\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") " pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" Feb 24 05:37:25.511805 master-0 kubenswrapper[34361]: I0224 05:37:25.511744 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh2pc\" (UniqueName: \"kubernetes.io/projected/32fd577d-8966-4ab1-95cf-357291084156-kube-api-access-fh2pc\") pod \"control-plane-machine-set-operator-686847ff5f-zzvtt\" (UID: \"32fd577d-8966-4ab1-95cf-357291084156\") " pod="openshift-machine-api/control-plane-machine-set-operator-686847ff5f-zzvtt" Feb 24 05:37:25.532865 master-0 kubenswrapper[34361]: I0224 05:37:25.532797 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9kf2\" (UniqueName: \"kubernetes.io/projected/58ecd829-4749-4c8a-933b-16b4acccac90-kube-api-access-m9kf2\") pod \"openshift-apiserver-operator-8586dccc9b-49fsv\" (UID: \"58ecd829-4749-4c8a-933b-16b4acccac90\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-8586dccc9b-49fsv" Feb 24 05:37:25.559073 master-0 kubenswrapper[34361]: I0224 05:37:25.558997 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhmp\" (UniqueName: \"kubernetes.io/projected/996ae0be-d36c-47f4-98b2-1c89591f9506-kube-api-access-jrhmp\") pod \"dns-operator-8c7d49845-4dhth\" (UID: \"996ae0be-d36c-47f4-98b2-1c89591f9506\") " pod="openshift-dns-operator/dns-operator-8c7d49845-4dhth" Feb 24 05:37:25.572086 master-0 kubenswrapper[34361]: I0224 05:37:25.571977 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4d5x\" (UniqueName: \"kubernetes.io/projected/49bfccec-61ec-4bef-a561-9f6e6f906215-kube-api-access-d4d5x\") pod \"package-server-manager-5c75f78c8b-9d82f\" (UID: \"49bfccec-61ec-4bef-a561-9f6e6f906215\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:37:25.590448 master-0 kubenswrapper[34361]: I0224 05:37:25.590372 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwc5b\" (UniqueName: \"kubernetes.io/projected/59333a14-5bdc-4590-a3da-af7300f086da-kube-api-access-wwc5b\") pod \"authentication-operator-5bd7c86784-kbb8z\" (UID: \"59333a14-5bdc-4590-a3da-af7300f086da\") " pod="openshift-authentication-operator/authentication-operator-5bd7c86784-kbb8z" Feb 24 05:37:25.610572 master-0 kubenswrapper[34361]: I0224 05:37:25.610510 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67qg5\" (UniqueName: \"kubernetes.io/projected/cd674e58-b749-46fb-8a28-66012fd8b401-kube-api-access-67qg5\") pod \"community-operators-68vwc\" (UID: \"cd674e58-b749-46fb-8a28-66012fd8b401\") " pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:25.616668 master-0 kubenswrapper[34361]: I0224 05:37:25.616627 34361 request.go:700] Waited for 3.919259205s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token Feb 24 05:37:25.635561 master-0 kubenswrapper[34361]: I0224 05:37:25.635507 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e05783d-6bd1-4c71-87d9-1eb3edd827b3-kube-api-access\") pod \"cluster-version-operator-57476485-7g2gq\" (UID: \"0e05783d-6bd1-4c71-87d9-1eb3edd827b3\") " pod="openshift-cluster-version/cluster-version-operator-57476485-7g2gq" Feb 24 05:37:25.655571 master-0 kubenswrapper[34361]: I0224 05:37:25.655537 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcb72\" (UniqueName: \"kubernetes.io/projected/dd29bef3-d27e-48b3-9aa0-d915e949b3d5-kube-api-access-zcb72\") pod 
\"marketplace-operator-6f5488b997-dbsnm\" (UID: \"dd29bef3-d27e-48b3-9aa0-d915e949b3d5\") " pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:37:25.682989 master-0 kubenswrapper[34361]: I0224 05:37:25.682918 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-957g9\" (UniqueName: \"kubernetes.io/projected/f3cd3830-62b5-49d1-917e-bd993d685c65-kube-api-access-957g9\") pod \"cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t\" (UID: \"f3cd3830-62b5-49d1-917e-bd993d685c65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-67dd8d7969-m8d2t" Feb 24 05:37:25.702481 master-0 kubenswrapper[34361]: I0224 05:37:25.702420 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs794\" (UniqueName: \"kubernetes.io/projected/88b915ff-fd94-4998-aa09-70f95c0f1b8a-kube-api-access-bs794\") pod \"ovnkube-control-plane-5d8dfcdc87-b8ght\" (UID: \"88b915ff-fd94-4998-aa09-70f95c0f1b8a\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-5d8dfcdc87-b8ght" Feb 24 05:37:25.721592 master-0 kubenswrapper[34361]: I0224 05:37:25.721551 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bf6w\" (UniqueName: \"kubernetes.io/projected/1e7f7c02-4c84-432a-8d59-25dd3bfef5c2-kube-api-access-4bf6w\") pod \"machine-config-controller-54cb48566c-9ww5z\" (UID: \"1e7f7c02-4c84-432a-8d59-25dd3bfef5c2\") " pod="openshift-machine-config-operator/machine-config-controller-54cb48566c-9ww5z" Feb 24 05:37:25.733804 master-0 kubenswrapper[34361]: I0224 05:37:25.733749 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5tgk\" (UniqueName: \"kubernetes.io/projected/a3561f49-0808-4d96-95ec-456fcb5c5bb4-kube-api-access-r5tgk\") pod \"machine-config-daemon-c56dz\" (UID: \"a3561f49-0808-4d96-95ec-456fcb5c5bb4\") " pod="openshift-machine-config-operator/machine-config-daemon-c56dz" 
Feb 24 05:37:25.763456 master-0 kubenswrapper[34361]: I0224 05:37:25.763388 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh2rh\" (UniqueName: \"kubernetes.io/projected/073c9b40-bb80-41a2-bcd2-cfdbe040a5a4-kube-api-access-dh2rh\") pod \"tuned-2w6mj\" (UID: \"073c9b40-bb80-41a2-bcd2-cfdbe040a5a4\") " pod="openshift-cluster-node-tuning-operator/tuned-2w6mj" Feb 24 05:37:25.779791 master-0 kubenswrapper[34361]: I0224 05:37:25.779742 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgf94\" (UniqueName: \"kubernetes.io/projected/7a2c651d-ea1a-41f2-9745-04adc8d88904-kube-api-access-fgf94\") pod \"etcd-operator-545bf96f4d-tfmbs\" (UID: \"7a2c651d-ea1a-41f2-9745-04adc8d88904\") " pod="openshift-etcd-operator/etcd-operator-545bf96f4d-tfmbs" Feb 24 05:37:25.790978 master-0 kubenswrapper[34361]: I0224 05:37:25.790934 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p67bp\" (UniqueName: \"kubernetes.io/projected/ab5afff8-1081-4acc-8ab9-d6bfd8df1d67-kube-api-access-p67bp\") pod \"service-ca-576b4d78bd-fsmrl\" (UID: \"ab5afff8-1081-4acc-8ab9-d6bfd8df1d67\") " pod="openshift-service-ca/service-ca-576b4d78bd-fsmrl" Feb 24 05:37:25.812126 master-0 kubenswrapper[34361]: I0224 05:37:25.812067 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46fll\" (UniqueName: \"kubernetes.io/projected/1163571d-f555-41ad-b04c-74c2dc452efe-kube-api-access-46fll\") pod \"telemeter-client-96c995bf5-57k8x\" (UID: \"1163571d-f555-41ad-b04c-74c2dc452efe\") " pod="openshift-monitoring/telemeter-client-96c995bf5-57k8x" Feb 24 05:37:25.840990 master-0 kubenswrapper[34361]: I0224 05:37:25.840857 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lt5r\" (UniqueName: \"kubernetes.io/projected/812552f3-09b1-43f8-b910-c78e776127f8-kube-api-access-4lt5r\") pod \"apiserver-6f8b7f45f7-5df4m\" (UID: 
\"812552f3-09b1-43f8-b910-c78e776127f8\") " pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:25.854676 master-0 kubenswrapper[34361]: I0224 05:37:25.854616 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddfqw\" (UniqueName: \"kubernetes.io/projected/39623346-691b-42c8-af76-409d4f6629af-kube-api-access-ddfqw\") pod \"cluster-baremetal-operator-d6bb9bb76-54hnv\" (UID: \"39623346-691b-42c8-af76-409d4f6629af\") " pod="openshift-machine-api/cluster-baremetal-operator-d6bb9bb76-54hnv" Feb 24 05:37:25.868643 master-0 kubenswrapper[34361]: I0224 05:37:25.868607 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6f05507-d5c1-4102-a220-1db715a496e3-kube-api-access\") pod \"openshift-kube-scheduler-operator-77cd4d9559-8l7xv\" (UID: \"e6f05507-d5c1-4102-a220-1db715a496e3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-77cd4d9559-8l7xv" Feb 24 05:37:25.890352 master-0 kubenswrapper[34361]: I0224 05:37:25.890286 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckfnc\" (UniqueName: \"kubernetes.io/projected/1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa-kube-api-access-ckfnc\") pod \"network-check-target-vp2jg\" (UID: \"1d922f6f-70a3-46cb-b230-6a1e2b8cfdfa\") " pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:37:25.913788 master-0 kubenswrapper[34361]: I0224 05:37:25.913728 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b7f4\" (UniqueName: \"kubernetes.io/projected/f6690909-3a87-4bdc-b0ec-1cdd4df32e4b-kube-api-access-6b7f4\") pod \"iptables-alerter-r2vvc\" (UID: \"f6690909-3a87-4bdc-b0ec-1cdd4df32e4b\") " pod="openshift-network-operator/iptables-alerter-r2vvc" Feb 24 05:37:25.930347 master-0 kubenswrapper[34361]: I0224 05:37:25.930248 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4p8zb\" (UniqueName: \"kubernetes.io/projected/c106275b-72b6-4877-95c3-830f93e35375-kube-api-access-4p8zb\") pod \"network-node-identity-rlg4x\" (UID: \"c106275b-72b6-4877-95c3-830f93e35375\") " pod="openshift-network-node-identity/network-node-identity-rlg4x" Feb 24 05:37:25.954591 master-0 kubenswrapper[34361]: I0224 05:37:25.954503 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt9fb\" (UniqueName: \"kubernetes.io/projected/116e6b47-d435-49ca-abb5-088788daf16a-kube-api-access-jt9fb\") pod \"machine-api-operator-5c7cf458b4-65mc5\" (UID: \"116e6b47-d435-49ca-abb5-088788daf16a\") " pod="openshift-machine-api/machine-api-operator-5c7cf458b4-65mc5" Feb 24 05:37:25.977245 master-0 kubenswrapper[34361]: I0224 05:37:25.977141 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbt92\" (UniqueName: \"kubernetes.io/projected/f938daff-1d36-4348-a689-3d1607058296-kube-api-access-xbt92\") pod \"ingress-canary-5m82s\" (UID: \"f938daff-1d36-4348-a689-3d1607058296\") " pod="openshift-ingress-canary/ingress-canary-5m82s" Feb 24 05:37:25.991998 master-0 kubenswrapper[34361]: I0224 05:37:25.991913 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5djr\" (UniqueName: \"kubernetes.io/projected/feee7fe8-e805-4807-b4c0-ecc7ef0f88d9-kube-api-access-h5djr\") pod \"csi-snapshot-controller-operator-6fb4df594f-8tv99\" (UID: \"feee7fe8-e805-4807-b4c0-ecc7ef0f88d9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-6fb4df594f-8tv99" Feb 24 05:37:26.014282 master-0 kubenswrapper[34361]: I0224 05:37:26.014210 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dwz2\" (UniqueName: \"kubernetes.io/projected/ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5-kube-api-access-5dwz2\") pod \"insights-operator-59b498fcfb-mprnx\" (UID: \"ee10cd9f-23da-4e40-bc7a-0856a9fa7ae5\") " 
pod="openshift-insights/insights-operator-59b498fcfb-mprnx" Feb 24 05:37:26.034628 master-0 kubenswrapper[34361]: I0224 05:37:26.034566 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kh6l\" (UniqueName: \"kubernetes.io/projected/5d51ce58-55f6-45d5-9d5d-7b31ae42380a-kube-api-access-2kh6l\") pod \"cluster-autoscaler-operator-86b8dc6d6-mcf2z\" (UID: \"5d51ce58-55f6-45d5-9d5d-7b31ae42380a\") " pod="openshift-machine-api/cluster-autoscaler-operator-86b8dc6d6-mcf2z" Feb 24 05:37:26.054336 master-0 kubenswrapper[34361]: I0224 05:37:26.054282 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79h66\" (UniqueName: \"kubernetes.io/projected/74e8b3c8-da80-492c-bfcf-199b40bde40b-kube-api-access-79h66\") pod \"ovnkube-node-vd82q\" (UID: \"74e8b3c8-da80-492c-bfcf-199b40bde40b\") " pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:26.083823 master-0 kubenswrapper[34361]: I0224 05:37:26.083731 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jspzm\" (UniqueName: \"kubernetes.io/projected/1533c5fa-0387-40bd-a959-e714b65cdacc-kube-api-access-jspzm\") pod \"network-check-source-58fb6744f5-kn2z7\" (UID: \"1533c5fa-0387-40bd-a959-e714b65cdacc\") " pod="openshift-network-diagnostics/network-check-source-58fb6744f5-kn2z7" Feb 24 05:37:26.100974 master-0 kubenswrapper[34361]: I0224 05:37:26.100797 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zxwj\" (UniqueName: \"kubernetes.io/projected/49b426a3-f16e-40e9-a166-7270d4cfcc60-kube-api-access-9zxwj\") pod \"packageserver-df5f88cd4-cwzcs\" (UID: \"49b426a3-f16e-40e9-a166-7270d4cfcc60\") " pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:37:26.110000 master-0 kubenswrapper[34361]: I0224 05:37:26.109937 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q2r9\" (UniqueName: 
\"kubernetes.io/projected/3d6b1ce7-1213-494c-829d-186d39eac7eb-kube-api-access-5q2r9\") pod \"ingress-operator-6569778c84-rr8r7\" (UID: \"3d6b1ce7-1213-494c-829d-186d39eac7eb\") " pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" Feb 24 05:37:26.141436 master-0 kubenswrapper[34361]: I0224 05:37:26.141370 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f92qq\" (UniqueName: \"kubernetes.io/projected/bf303acd-b62e-4aa3-bd8d-15f5844302d8-kube-api-access-f92qq\") pod \"openshift-state-metrics-6dbff8cb4c-hvjlk\" (UID: \"bf303acd-b62e-4aa3-bd8d-15f5844302d8\") " pod="openshift-monitoring/openshift-state-metrics-6dbff8cb4c-hvjlk" Feb 24 05:37:26.160977 master-0 kubenswrapper[34361]: I0224 05:37:26.160872 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl6rx\" (UniqueName: \"kubernetes.io/projected/2c6bb439-ed17-4761-b193-580be5f6aa00-kube-api-access-pl6rx\") pod \"certified-operators-gn8m8\" (UID: \"2c6bb439-ed17-4761-b193-580be5f6aa00\") " pod="openshift-marketplace/certified-operators-gn8m8" Feb 24 05:37:26.169787 master-0 kubenswrapper[34361]: I0224 05:37:26.169734 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ktz5\" (UniqueName: \"kubernetes.io/projected/7dcc5520-7aa8-4cd5-b06d-591827ed4e2a-kube-api-access-8ktz5\") pod \"network-metrics-daemon-2vsjh\" (UID: \"7dcc5520-7aa8-4cd5-b06d-591827ed4e2a\") " pod="openshift-multus/network-metrics-daemon-2vsjh" Feb 24 05:37:26.191001 master-0 kubenswrapper[34361]: I0224 05:37:26.190913 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb75b\" (UniqueName: \"kubernetes.io/projected/e1f03d97-1a6a-41e4-9ed3-cd9b01c46400-kube-api-access-nb75b\") pod \"cluster-storage-operator-f94476f49-tlmg5\" (UID: \"e1f03d97-1a6a-41e4-9ed3-cd9b01c46400\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f94476f49-tlmg5" Feb 24 
05:37:26.210639 master-0 kubenswrapper[34361]: I0224 05:37:26.210540 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmf87\" (UniqueName: \"kubernetes.io/projected/933beda1-c930-4831-a886-3cc6b7a992ad-kube-api-access-gmf87\") pod \"openshift-controller-manager-operator-584cc7bcb5-zz9fm\" (UID: \"933beda1-c930-4831-a886-3cc6b7a992ad\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-584cc7bcb5-zz9fm" Feb 24 05:37:26.236881 master-0 kubenswrapper[34361]: I0224 05:37:26.236791 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23bdafdd-27c9-4461-be4a-3ea916ac3875-bound-sa-token\") pod \"cluster-image-registry-operator-779979bdf7-t98nr\" (UID: \"23bdafdd-27c9-4461-be4a-3ea916ac3875\") " pod="openshift-image-registry/cluster-image-registry-operator-779979bdf7-t98nr" Feb 24 05:37:26.255014 master-0 kubenswrapper[34361]: I0224 05:37:26.254952 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm88x\" (UniqueName: \"kubernetes.io/projected/f2be5ed6-fdf0-4462-a319-eed1a5a1c778-kube-api-access-lm88x\") pod \"node-exporter-qk7rz\" (UID: \"f2be5ed6-fdf0-4462-a319-eed1a5a1c778\") " pod="openshift-monitoring/node-exporter-qk7rz" Feb 24 05:37:26.273122 master-0 kubenswrapper[34361]: E0224 05:37:26.273008 34361 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:26.273122 master-0 kubenswrapper[34361]: E0224 05:37:26.273113 34361 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:26.273373 master-0 kubenswrapper[34361]: E0224 05:37:26.273253 34361 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access podName:afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:26.773226683 +0000 UTC m=+6.475843739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access") pod "installer-3-master-0" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:26.310460 master-0 kubenswrapper[34361]: I0224 05:37:26.310358 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:26.310460 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld Feb 24 05:37:26.310460 master-0 kubenswrapper[34361]: [+]process-running ok Feb 24 05:37:26.310460 master-0 kubenswrapper[34361]: healthz check failed Feb 24 05:37:26.310850 master-0 kubenswrapper[34361]: I0224 05:37:26.310486 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:26.320433 master-0 kubenswrapper[34361]: E0224 05:37:26.320382 34361 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 24 05:37:26.330933 master-0 kubenswrapper[34361]: E0224 05:37:26.330878 34361 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.735s" Feb 24 05:37:26.331150 master-0 kubenswrapper[34361]: I0224 
05:37:26.330942 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 24 05:37:26.331150 master-0 kubenswrapper[34361]: I0224 05:37:26.330968 34361 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="18870904-bc46-4310-ab4a-d3ad9e6837a8" Feb 24 05:37:26.331150 master-0 kubenswrapper[34361]: I0224 05:37:26.331004 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:26.331150 master-0 kubenswrapper[34361]: I0224 05:37:26.331075 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" Feb 24 05:37:26.331150 master-0 kubenswrapper[34361]: I0224 05:37:26.331122 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:37:26.350667 master-0 kubenswrapper[34361]: I0224 05:37:26.349544 34361 scope.go:117] "RemoveContainer" containerID="e5961da58ba0000499976ed125663a28df9508f26428d259f2513e76bb11ef6f" Feb 24 05:37:26.356690 master-0 kubenswrapper[34361]: I0224 05:37:26.356526 34361 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 24 05:37:26.385484 master-0 kubenswrapper[34361]: I0224 05:37:26.385406 34361 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 24 05:37:26.385785 master-0 kubenswrapper[34361]: I0224 05:37:26.385709 34361 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 24 05:37:26.401516 master-0 kubenswrapper[34361]: I0224 05:37:26.401386 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-75d56db95f-hw4m2" Feb 24 05:37:26.401516 master-0 kubenswrapper[34361]: I0224 05:37:26.401447 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 24 05:37:26.401516 master-0 kubenswrapper[34361]: I0224 05:37:26.401476 34361 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="18870904-bc46-4310-ab4a-d3ad9e6837a8" Feb 24 05:37:26.401516 master-0 kubenswrapper[34361]: I0224 05:37:26.401506 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:37:26.401516 master-0 kubenswrapper[34361]: I0224 05:37:26.401533 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:37:26.401826 master-0 kubenswrapper[34361]: I0224 05:37:26.401612 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:37:26.401826 master-0 kubenswrapper[34361]: I0224 05:37:26.401705 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:26.401826 master-0 kubenswrapper[34361]: I0224 05:37:26.401769 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:26.401826 master-0 kubenswrapper[34361]: I0224 05:37:26.401815 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-6f47d587d6-7b87v" Feb 24 05:37:26.402256 master-0 kubenswrapper[34361]: I0224 05:37:26.402137 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:37:26.402823 master-0 kubenswrapper[34361]: I0224 05:37:26.402777 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:26.402881 master-0 kubenswrapper[34361]: I0224 05:37:26.402868 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-cdk2w" Feb 24 05:37:26.402942 master-0 kubenswrapper[34361]: I0224 05:37:26.402922 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-cdk2w" Feb 24 05:37:26.403022 master-0 kubenswrapper[34361]: I0224 05:37:26.403003 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:26.403131 master-0 kubenswrapper[34361]: I0224 05:37:26.403110 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:26.403243 master-0 kubenswrapper[34361]: I0224 05:37:26.403220 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:26.403300 master-0 kubenswrapper[34361]: I0224 05:37:26.403281 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:37:26.403823 master-0 kubenswrapper[34361]: I0224 05:37:26.403753 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 24 05:37:26.403942 master-0 kubenswrapper[34361]: I0224 05:37:26.403912 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:26.403978 master-0 kubenswrapper[34361]: I0224 05:37:26.403965 34361 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:26.404010 master-0 kubenswrapper[34361]: I0224 05:37:26.403991 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" Feb 24 05:37:26.404075 master-0 kubenswrapper[34361]: I0224 05:37:26.404048 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:37:26.404124 master-0 kubenswrapper[34361]: I0224 05:37:26.404104 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:37:26.404183 master-0 kubenswrapper[34361]: I0224 05:37:26.404163 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5499d7f7bb-8xdmq" Feb 24 05:37:26.447650 master-0 kubenswrapper[34361]: I0224 05:37:26.447570 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:26.447650 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld Feb 24 05:37:26.447650 master-0 kubenswrapper[34361]: [+]process-running ok Feb 24 05:37:26.447650 master-0 kubenswrapper[34361]: healthz check failed Feb 24 05:37:26.448300 master-0 kubenswrapper[34361]: I0224 05:37:26.447649 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:26.744114 master-0 kubenswrapper[34361]: I0224 05:37:26.744036 34361 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:26.748101 master-0 kubenswrapper[34361]: I0224 05:37:26.748054 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" Feb 24 05:37:26.863705 master-0 kubenswrapper[34361]: I0224 05:37:26.863632 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:26.863988 master-0 kubenswrapper[34361]: E0224 05:37:26.863924 34361 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:26.863988 master-0 kubenswrapper[34361]: E0224 05:37:26.863986 34361 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:26.864151 master-0 kubenswrapper[34361]: E0224 05:37:26.864090 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access podName:afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:27.864057291 +0000 UTC m=+7.566674347 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access") pod "installer-3-master-0" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:26.884603 master-0 kubenswrapper[34361]: I0224 05:37:26.884460 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:37:26.890076 master-0 kubenswrapper[34361]: I0224 05:37:26.890032 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c75f78c8b-9d82f" Feb 24 05:37:26.997979 master-0 kubenswrapper[34361]: I0224 05:37:26.997908 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-6569778c84-rr8r7_3d6b1ce7-1213-494c-829d-186d39eac7eb/ingress-operator/6.log" Feb 24 05:37:26.999847 master-0 kubenswrapper[34361]: I0224 05:37:26.999797 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6569778c84-rr8r7" event={"ID":"3d6b1ce7-1213-494c-829d-186d39eac7eb","Type":"ContainerStarted","Data":"65b7ffbad19776bdf619ae50589506625d8d78eaccf3ab29e773099a14a72418"} Feb 24 05:37:27.000207 master-0 kubenswrapper[34361]: I0224 05:37:27.000142 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:37:27.005289 master-0 kubenswrapper[34361]: I0224 05:37:27.004874 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:27.005379 master-0 kubenswrapper[34361]: I0224 05:37:27.005334 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:37:27.172248 master-0 
kubenswrapper[34361]: I0224 05:37:27.172107 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:37:27.188631 master-0 kubenswrapper[34361]: I0224 05:37:27.188573 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" Feb 24 05:37:27.312417 master-0 kubenswrapper[34361]: I0224 05:37:27.312300 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:37:27.318301 master-0 kubenswrapper[34361]: I0224 05:37:27.318230 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-df5f88cd4-cwzcs" Feb 24 05:37:27.348441 master-0 kubenswrapper[34361]: I0224 05:37:27.348282 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:27.377656 master-0 kubenswrapper[34361]: I0224 05:37:27.377436 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:27.442416 master-0 kubenswrapper[34361]: I0224 05:37:27.442156 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:27.448631 master-0 kubenswrapper[34361]: I0224 05:37:27.448564 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:27.448631 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld Feb 24 05:37:27.448631 master-0 kubenswrapper[34361]: [+]process-running ok Feb 24 05:37:27.448631 
master-0 kubenswrapper[34361]: healthz check failed Feb 24 05:37:27.449657 master-0 kubenswrapper[34361]: I0224 05:37:27.449426 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:27.449880 master-0 kubenswrapper[34361]: I0224 05:37:27.449727 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:27.885101 master-0 kubenswrapper[34361]: I0224 05:37:27.885027 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:27.885447 master-0 kubenswrapper[34361]: E0224 05:37:27.885369 34361 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:27.885447 master-0 kubenswrapper[34361]: E0224 05:37:27.885436 34361 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:27.885596 master-0 kubenswrapper[34361]: E0224 05:37:27.885536 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access podName:afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:29.885505164 +0000 UTC m=+9.588122220 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access") pod "installer-3-master-0" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:27.900010 master-0 kubenswrapper[34361]: I0224 05:37:27.899911 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=6.899887881 podStartE2EDuration="6.899887881s" podCreationTimestamp="2026-02-24 05:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:37:27.897070115 +0000 UTC m=+7.599687201" watchObservedRunningTime="2026-02-24 05:37:27.899887881 +0000 UTC m=+7.602504937" Feb 24 05:37:27.985664 master-0 kubenswrapper[34361]: I0224 05:37:27.985578 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:28.008571 master-0 kubenswrapper[34361]: I0224 05:37:28.008477 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:37:28.008571 master-0 kubenswrapper[34361]: I0224 05:37:28.008527 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:37:28.010798 master-0 kubenswrapper[34361]: I0224 05:37:28.009201 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:37:28.031586 master-0 kubenswrapper[34361]: I0224 05:37:28.031520 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:28.448370 master-0 kubenswrapper[34361]: I0224 05:37:28.448267 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:28.448370 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld Feb 24 05:37:28.448370 master-0 kubenswrapper[34361]: [+]process-running ok Feb 24 05:37:28.448370 master-0 kubenswrapper[34361]: healthz check failed Feb 24 05:37:28.449231 master-0 kubenswrapper[34361]: I0224 05:37:28.448376 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:28.747986 master-0 kubenswrapper[34361]: I0224 05:37:28.747919 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 24 05:37:28.766408 master-0 kubenswrapper[34361]: I0224 05:37:28.766356 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 24 05:37:28.773830 master-0 kubenswrapper[34361]: I0224 05:37:28.773738 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=7.773710379 podStartE2EDuration="7.773710379s" podCreationTimestamp="2026-02-24 05:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:37:28.772978618 +0000 UTC m=+8.475595714" watchObservedRunningTime="2026-02-24 05:37:28.773710379 +0000 UTC m=+8.476327465" Feb 24 05:37:29.016067 master-0 kubenswrapper[34361]: I0224 05:37:29.015898 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:37:29.063820 master-0 kubenswrapper[34361]: I0224 05:37:29.063742 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-68vwc" Feb 
24 05:37:29.066878 master-0 kubenswrapper[34361]: I0224 05:37:29.066815 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 24 05:37:29.110348 master-0 kubenswrapper[34361]: I0224 05:37:29.110272 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-68vwc" Feb 24 05:37:29.269302 master-0 kubenswrapper[34361]: I0224 05:37:29.267218 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:37:29.270984 master-0 kubenswrapper[34361]: I0224 05:37:29.270907 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-vp2jg" Feb 24 05:37:29.445185 master-0 kubenswrapper[34361]: I0224 05:37:29.445089 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:29.448096 master-0 kubenswrapper[34361]: I0224 05:37:29.448037 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:29.448096 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld Feb 24 05:37:29.448096 master-0 kubenswrapper[34361]: [+]process-running ok Feb 24 05:37:29.448096 master-0 kubenswrapper[34361]: healthz check failed Feb 24 05:37:29.448437 master-0 kubenswrapper[34361]: I0224 05:37:29.448399 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:29.635877 master-0 kubenswrapper[34361]: I0224 05:37:29.635675 34361 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:37:29.636585 master-0 kubenswrapper[34361]: I0224 05:37:29.636000 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:37:29.640399 master-0 kubenswrapper[34361]: I0224 05:37:29.640351 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 24 05:37:29.874341 master-0 kubenswrapper[34361]: I0224 05:37:29.874254 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:37:29.934490 master-0 kubenswrapper[34361]: I0224 05:37:29.934268 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:29.934853 master-0 kubenswrapper[34361]: E0224 05:37:29.934772 34361 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:29.934923 master-0 kubenswrapper[34361]: E0224 05:37:29.934886 34361 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:29.935037 master-0 kubenswrapper[34361]: E0224 05:37:29.935003 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access podName:afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a nodeName:}" failed. 
No retries permitted until 2026-02-24 05:37:33.934971035 +0000 UTC m=+13.637588111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access") pod "installer-3-master-0" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:29.955666 master-0 kubenswrapper[34361]: I0224 05:37:29.955584 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v64s6" Feb 24 05:37:30.005545 master-0 kubenswrapper[34361]: I0224 05:37:30.004763 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:30.056404 master-0 kubenswrapper[34361]: I0224 05:37:30.056287 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-84b8d9d697-zvzxs" Feb 24 05:37:30.077594 master-0 kubenswrapper[34361]: I0224 05:37:30.077519 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:30.077931 master-0 kubenswrapper[34361]: I0224 05:37:30.077731 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:37:30.085127 master-0 kubenswrapper[34361]: I0224 05:37:30.085032 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:37:30.120260 master-0 kubenswrapper[34361]: I0224 05:37:30.120201 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:37:30.122565 master-0 kubenswrapper[34361]: I0224 05:37:30.122508 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/marketplace-operator-6f5488b997-dbsnm" Feb 24 05:37:30.449529 master-0 kubenswrapper[34361]: I0224 05:37:30.449433 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 05:37:30.449529 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld Feb 24 05:37:30.449529 master-0 kubenswrapper[34361]: [+]process-running ok Feb 24 05:37:30.449529 master-0 kubenswrapper[34361]: healthz check failed Feb 24 05:37:30.450227 master-0 kubenswrapper[34361]: I0224 05:37:30.449549 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 05:37:30.759683 master-0 kubenswrapper[34361]: I0224 05:37:30.759569 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:30.768381 master-0 kubenswrapper[34361]: I0224 05:37:30.768231 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-fdc9d7cdd-8v72m" Feb 24 05:37:31.077050 master-0 kubenswrapper[34361]: I0224 05:37:31.076855 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6f8b7f45f7-5df4m" Feb 24 05:37:31.322672 master-0 kubenswrapper[34361]: I0224 05:37:31.322539 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xm8sw" Feb 24 05:37:31.376200 master-0 kubenswrapper[34361]: I0224 05:37:31.375913 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xm8sw" 
Feb 24 05:37:31.437289 master-0 kubenswrapper[34361]: I0224 05:37:31.436388 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2"
Feb 24 05:37:31.445416 master-0 kubenswrapper[34361]: I0224 05:37:31.443533 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-596f79dd6f-v22h2"
Feb 24 05:37:31.450370 master-0 kubenswrapper[34361]: I0224 05:37:31.450252 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:31.450370 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld
Feb 24 05:37:31.450370 master-0 kubenswrapper[34361]: [+]process-running ok
Feb 24 05:37:31.450370 master-0 kubenswrapper[34361]: healthz check failed
Feb 24 05:37:31.450743 master-0 kubenswrapper[34361]: I0224 05:37:31.450356 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:31.688111 master-0 kubenswrapper[34361]: I0224 05:37:31.687933 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:37:31.695741 master-0 kubenswrapper[34361]: I0224 05:37:31.695683 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:37:32.448560 master-0 kubenswrapper[34361]: I0224 05:37:32.448394 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:32.448560 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld
Feb 24 05:37:32.448560 master-0 kubenswrapper[34361]: [+]process-running ok
Feb 24 05:37:32.448560 master-0 kubenswrapper[34361]: healthz check failed
Feb 24 05:37:32.448560 master-0 kubenswrapper[34361]: I0224 05:37:32.448493 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:32.720467 master-0 kubenswrapper[34361]: I0224 05:37:32.720261 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gn8m8"
Feb 24 05:37:33.017993 master-0 kubenswrapper[34361]: I0224 05:37:33.017932 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:33.018267 master-0 kubenswrapper[34361]: I0224 05:37:33.018123 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:37:33.018267 master-0 kubenswrapper[34361]: I0224 05:37:33.018136 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:37:33.054535 master-0 kubenswrapper[34361]: I0224 05:37:33.054375 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q"
Feb 24 05:37:33.090693 master-0 kubenswrapper[34361]: I0224 05:37:33.090627 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:37:33.412635 master-0 kubenswrapper[34361]: I0224 05:37:33.412454 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:37:33.418267 master-0 kubenswrapper[34361]: I0224 05:37:33.418225 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:37:33.426514 master-0 kubenswrapper[34361]: I0224 05:37:33.426453 34361 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 24 05:37:33.426815 master-0 kubenswrapper[34361]: I0224 05:37:33.426770 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor" containerID="cri-o://31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350" gracePeriod=5
Feb 24 05:37:33.448871 master-0 kubenswrapper[34361]: I0224 05:37:33.448789 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:33.448871 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld
Feb 24 05:37:33.448871 master-0 kubenswrapper[34361]: [+]process-running ok
Feb 24 05:37:33.448871 master-0 kubenswrapper[34361]: healthz check failed
Feb 24 05:37:33.449475 master-0 kubenswrapper[34361]: I0224 05:37:33.448886 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:33.643911 master-0 kubenswrapper[34361]: I0224 05:37:33.643845 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj"
Feb 24 05:37:33.647042 master-0 kubenswrapper[34361]: I0224 05:37:33.647008 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-9cc7d7bb-t75jj"
Feb 24 05:37:33.936492 master-0 kubenswrapper[34361]: I0224 05:37:33.936428 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 05:37:33.936857 master-0 kubenswrapper[34361]: E0224 05:37:33.936777 34361 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 05:37:33.936909 master-0 kubenswrapper[34361]: E0224 05:37:33.936874 34361 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 05:37:33.937053 master-0 kubenswrapper[34361]: E0224 05:37:33.937019 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access podName:afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:41.936970428 +0000 UTC m=+21.639587514 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access") pod "installer-3-master-0" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 05:37:34.447585 master-0 kubenswrapper[34361]: I0224 05:37:34.447509 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:34.447585 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld
Feb 24 05:37:34.447585 master-0 kubenswrapper[34361]: [+]process-running ok
Feb 24 05:37:34.447585 master-0 kubenswrapper[34361]: healthz check failed
Feb 24 05:37:34.447983 master-0 kubenswrapper[34361]: I0224 05:37:34.447612 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:34.906347 master-0 kubenswrapper[34361]: I0224 05:37:34.906254 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gn8m8"
Feb 24 05:37:35.050041 master-0 kubenswrapper[34361]: I0224 05:37:35.049940 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v64s6"
Feb 24 05:37:35.107081 master-0 kubenswrapper[34361]: I0224 05:37:35.106538 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v64s6"
Feb 24 05:37:35.450916 master-0 kubenswrapper[34361]: I0224 05:37:35.450863 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:35.450916 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld
Feb 24 05:37:35.450916 master-0 kubenswrapper[34361]: [+]process-running ok
Feb 24 05:37:35.450916 master-0 kubenswrapper[34361]: healthz check failed
Feb 24 05:37:35.451377 master-0 kubenswrapper[34361]: I0224 05:37:35.451344 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:36.448219 master-0 kubenswrapper[34361]: I0224 05:37:36.448143 34361 patch_prober.go:28] interesting pod/router-default-7b65dc9fcb-zxkt2 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 05:37:36.448219 master-0 kubenswrapper[34361]: [-]has-synced failed: reason withheld
Feb 24 05:37:36.448219 master-0 kubenswrapper[34361]: [+]process-running ok
Feb 24 05:37:36.448219 master-0 kubenswrapper[34361]: healthz check failed
Feb 24 05:37:36.448975 master-0 kubenswrapper[34361]: I0224 05:37:36.448240 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2" podUID="be7a4b9e-1e9a-4298-b804-21b683805c0e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 05:37:37.452987 master-0 kubenswrapper[34361]: I0224 05:37:37.452911 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:37:37.457872 master-0 kubenswrapper[34361]: I0224 05:37:37.457825 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7b65dc9fcb-zxkt2"
Feb 24 05:37:38.543454 master-0 kubenswrapper[34361]: I0224 05:37:38.543056 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 24 05:37:38.543454 master-0 kubenswrapper[34361]: I0224 05:37:38.543173 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:38.604609 master-0 kubenswrapper[34361]: I0224 05:37:38.604528 34361 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Feb 24 05:37:38.615060 master-0 kubenswrapper[34361]: I0224 05:37:38.615002 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 24 05:37:38.615060 master-0 kubenswrapper[34361]: I0224 05:37:38.615056 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 24 05:37:38.615416 master-0 kubenswrapper[34361]: I0224 05:37:38.615152 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:38.615416 master-0 kubenswrapper[34361]: I0224 05:37:38.615236 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests" (OuterVolumeSpecName: "manifests") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:38.615416 master-0 kubenswrapper[34361]: I0224 05:37:38.615360 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 24 05:37:38.615563 master-0 kubenswrapper[34361]: I0224 05:37:38.615432 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:38.615563 master-0 kubenswrapper[34361]: I0224 05:37:38.615537 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 24 05:37:38.615656 master-0 kubenswrapper[34361]: I0224 05:37:38.615569 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") pod \"5c4f5d60772fa42f26e9c219bffa62b9\" (UID: \"5c4f5d60772fa42f26e9c219bffa62b9\") "
Feb 24 05:37:38.615755 master-0 kubenswrapper[34361]: I0224 05:37:38.615607 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log" (OuterVolumeSpecName: "var-log") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:38.616006 master-0 kubenswrapper[34361]: I0224 05:37:38.615977 34361 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:38.616006 master-0 kubenswrapper[34361]: I0224 05:37:38.615998 34361 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-log\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:38.616006 master-0 kubenswrapper[34361]: I0224 05:37:38.616007 34361 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:38.616145 master-0 kubenswrapper[34361]: I0224 05:37:38.616016 34361 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-manifests\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:38.621432 master-0 kubenswrapper[34361]: I0224 05:37:38.621387 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "5c4f5d60772fa42f26e9c219bffa62b9" (UID: "5c4f5d60772fa42f26e9c219bffa62b9"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:37:38.655779 master-0 kubenswrapper[34361]: I0224 05:37:38.655735 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 24 05:37:38.656053 master-0 kubenswrapper[34361]: I0224 05:37:38.656027 34361 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="db16fbef-3e6b-44d9-a2cd-56999dfa4101"
Feb 24 05:37:38.656142 master-0 kubenswrapper[34361]: I0224 05:37:38.656128 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 24 05:37:38.656234 master-0 kubenswrapper[34361]: I0224 05:37:38.656220 34361 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="db16fbef-3e6b-44d9-a2cd-56999dfa4101"
Feb 24 05:37:38.718173 master-0 kubenswrapper[34361]: I0224 05:37:38.718019 34361 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f5d60772fa42f26e9c219bffa62b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:37:39.132744 master-0 kubenswrapper[34361]: I0224 05:37:39.132690 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5c4f5d60772fa42f26e9c219bffa62b9/startup-monitor/0.log"
Feb 24 05:37:39.133057 master-0 kubenswrapper[34361]: I0224 05:37:39.132763 34361 generic.go:334] "Generic (PLEG): container finished" podID="5c4f5d60772fa42f26e9c219bffa62b9" containerID="31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350" exitCode=137
Feb 24 05:37:39.133057 master-0 kubenswrapper[34361]: I0224 05:37:39.132823 34361 scope.go:117] "RemoveContainer" containerID="31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350"
Feb 24 05:37:39.133057 master-0 kubenswrapper[34361]: I0224 05:37:39.132849 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:37:39.158142 master-0 kubenswrapper[34361]: I0224 05:37:39.157891 34361 scope.go:117] "RemoveContainer" containerID="31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350"
Feb 24 05:37:39.159881 master-0 kubenswrapper[34361]: E0224 05:37:39.159843 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350\": container with ID starting with 31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350 not found: ID does not exist" containerID="31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350"
Feb 24 05:37:39.159961 master-0 kubenswrapper[34361]: I0224 05:37:39.159901 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350"} err="failed to get container status \"31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350\": rpc error: code = NotFound desc = could not find container \"31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350\": container with ID starting with 31d60c0e2062a40dfd5b30452208a7d04b42bc7bacd899eda1a3c59f7769f350 not found: ID does not exist"
Feb 24 05:37:40.607123 master-0 kubenswrapper[34361]: I0224 05:37:40.607038 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4f5d60772fa42f26e9c219bffa62b9" path="/var/lib/kubelet/pods/5c4f5d60772fa42f26e9c219bffa62b9/volumes"
Feb 24 05:37:41.974424 master-0 kubenswrapper[34361]: I0224 05:37:41.971507 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0"
Feb 24 05:37:41.974424 master-0 kubenswrapper[34361]: E0224 05:37:41.971955 34361 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 05:37:41.974424 master-0 kubenswrapper[34361]: E0224 05:37:41.971993 34361 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 05:37:41.974424 master-0 kubenswrapper[34361]: E0224 05:37:41.972080 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access podName:afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a nodeName:}" failed. No retries permitted until 2026-02-24 05:37:57.972048791 +0000 UTC m=+37.674665877 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access") pod "installer-3-master-0" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 24 05:37:42.771062 master-0 kubenswrapper[34361]: I0224 05:37:42.770988 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gn8m8"
Feb 24 05:37:42.819840 master-0 kubenswrapper[34361]: I0224 05:37:42.819769 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gn8m8"
Feb 24 05:37:45.589540 master-0 kubenswrapper[34361]: I0224 05:37:45.589462 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"]
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.589836 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e44f770d-f88d-446a-a22f-51b30e89690c" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.589854 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="e44f770d-f88d-446a-a22f-51b30e89690c" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.589917 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d3d57f1-cd67-4f1d-b267-f652b9bb3448" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.589926 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d3d57f1-cd67-4f1d-b267-f652b9bb3448" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.589943 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="154c1cd0-d69a-4213-8fc2-2d80217c358e" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.589952 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="154c1cd0-d69a-4213-8fc2-2d80217c358e" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.589973 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d070e9-4193-4598-ad68-15955b07d649" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.589980 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d070e9-4193-4598-ad68-15955b07d649" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.589992 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8978e4e5-18ef-4b69-a127-5e9409163935" containerName="collect-profiles"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.589999 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8978e4e5-18ef-4b69-a127-5e9409163935" containerName="collect-profiles"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.590008 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.590015 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.590036 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.590043 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.590064 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.590074 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.590098 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.590106 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.590120 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df29682-0936-44a2-9629-2e90115671e0" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.590127 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df29682-0936-44a2-9629-2e90115671e0" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.590146 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e058a29-f50f-473a-a217-0021923ebc7c" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.590154 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e058a29-f50f-473a-a217-0021923ebc7c" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.590187 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d063f48-5f89-47d0-bafc-84a52839c806" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: I0224 05:37:45.590197 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d063f48-5f89-47d0-bafc-84a52839c806" containerName="installer"
Feb 24 05:37:45.590180 master-0 kubenswrapper[34361]: E0224 05:37:45.590221 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29b0d9bb-1b88-4023-8b08-896d581c79c7" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590232 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="29b0d9bb-1b88-4023-8b08-896d581c79c7" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: E0224 05:37:45.590246 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" containerName="collect-profiles"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590254 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" containerName="collect-profiles"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590455 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a278410-3079-49d9-8c59-4cedf3f50213" containerName="assisted-installer-controller"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590473 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d070e9-4193-4598-ad68-15955b07d649" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590544 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d063f48-5f89-47d0-bafc-84a52839c806" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590563 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d3d57f1-cd67-4f1d-b267-f652b9bb3448" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590578 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df29682-0936-44a2-9629-2e90115671e0" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590594 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="8978e4e5-18ef-4b69-a127-5e9409163935" containerName="collect-profiles"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590611 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4f5d60772fa42f26e9c219bffa62b9" containerName="startup-monitor"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590627 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" containerName="collect-profiles"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590643 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ac3cae-8c8a-4e8f-9f58-ab82b543ec86" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590656 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="e44f770d-f88d-446a-a22f-51b30e89690c" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590669 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="154c1cd0-d69a-4213-8fc2-2d80217c358e" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590685 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590696 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e058a29-f50f-473a-a217-0021923ebc7c" containerName="installer"
Feb 24 05:37:45.591065 master-0 kubenswrapper[34361]: I0224 05:37:45.590713 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="29b0d9bb-1b88-4023-8b08-896d581c79c7" containerName="installer"
Feb 24 05:37:45.591698 master-0 kubenswrapper[34361]: I0224 05:37:45.591373 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.594528 master-0 kubenswrapper[34361]: I0224 05:37:45.594435 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 24 05:37:45.594904 master-0 kubenswrapper[34361]: I0224 05:37:45.594607 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 24 05:37:45.594904 master-0 kubenswrapper[34361]: I0224 05:37:45.594621 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 24 05:37:45.594904 master-0 kubenswrapper[34361]: I0224 05:37:45.594629 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 24 05:37:45.596256 master-0 kubenswrapper[34361]: I0224 05:37:45.596212 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 24 05:37:45.596596 master-0 kubenswrapper[34361]: I0224 05:37:45.596561 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-b9rnb"
Feb 24 05:37:45.596939 master-0 kubenswrapper[34361]: I0224 05:37:45.596905 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 24 05:37:45.597642 master-0 kubenswrapper[34361]: I0224 05:37:45.597608 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 24 05:37:45.597886 master-0 kubenswrapper[34361]: I0224 05:37:45.597853 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 24 05:37:45.599624 master-0 kubenswrapper[34361]: I0224 05:37:45.599581 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 24 05:37:45.599690 master-0 kubenswrapper[34361]: I0224 05:37:45.599665 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 24 05:37:45.606962 master-0 kubenswrapper[34361]: I0224 05:37:45.606916 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 24 05:37:45.609674 master-0 kubenswrapper[34361]: I0224 05:37:45.609617 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 24 05:37:45.621034 master-0 kubenswrapper[34361]: I0224 05:37:45.620980 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 24 05:37:45.627467 master-0 kubenswrapper[34361]: I0224 05:37:45.627414 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"]
Feb 24 05:37:45.752596 master-0 kubenswrapper[34361]: I0224 05:37:45.752499 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753058 master-0 kubenswrapper[34361]: I0224 05:37:45.752625 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753058 master-0 kubenswrapper[34361]: I0224 05:37:45.752660 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7lq\" (UniqueName: \"kubernetes.io/projected/7ab989ea-1de3-497d-9834-889d587a0270-kube-api-access-mp7lq\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753058 master-0 kubenswrapper[34361]: I0224 05:37:45.752697 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753058 master-0 kubenswrapper[34361]: I0224 05:37:45.752742 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753058 master-0 kubenswrapper[34361]: I0224 05:37:45.752793 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753351 master-0 kubenswrapper[34361]: I0224 05:37:45.753015 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753351 master-0 kubenswrapper[34361]: I0224 05:37:45.753257 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-audit-policies\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753464 master-0 kubenswrapper[34361]: I0224 05:37:45.753396 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753564 master-0 kubenswrapper[34361]: I0224 05:37:45.753518 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ab989ea-1de3-497d-9834-889d587a0270-audit-dir\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:37:45.753621 master-0 kubenswrapper[34361]: I0224 05:37:45.753582 34361 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.753725 master-0 kubenswrapper[34361]: I0224 05:37:45.753687 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.753828 master-0 kubenswrapper[34361]: I0224 05:37:45.753786 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.864028 master-0 kubenswrapper[34361]: I0224 05:37:45.863802 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ab989ea-1de3-497d-9834-889d587a0270-audit-dir\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.864028 master-0 kubenswrapper[34361]: I0224 05:37:45.863902 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/7ab989ea-1de3-497d-9834-889d587a0270-audit-dir\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.864377 master-0 kubenswrapper[34361]: I0224 05:37:45.864064 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.864377 master-0 kubenswrapper[34361]: I0224 05:37:45.864229 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.864470 master-0 kubenswrapper[34361]: I0224 05:37:45.864425 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.864591 master-0 kubenswrapper[34361]: I0224 05:37:45.864550 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: 
\"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.864815 master-0 kubenswrapper[34361]: I0224 05:37:45.864781 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.864917 master-0 kubenswrapper[34361]: I0224 05:37:45.864886 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7lq\" (UniqueName: \"kubernetes.io/projected/7ab989ea-1de3-497d-9834-889d587a0270-kube-api-access-mp7lq\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.865015 master-0 kubenswrapper[34361]: I0224 05:37:45.864985 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.865179 master-0 kubenswrapper[34361]: I0224 05:37:45.865147 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.865265 master-0 kubenswrapper[34361]: I0224 05:37:45.865232 
34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.865345 master-0 kubenswrapper[34361]: I0224 05:37:45.865282 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.865400 master-0 kubenswrapper[34361]: I0224 05:37:45.865347 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.865561 master-0 kubenswrapper[34361]: I0224 05:37:45.865491 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-audit-policies\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.865654 master-0 kubenswrapper[34361]: I0224 05:37:45.865582 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.866003 master-0 kubenswrapper[34361]: I0224 05:37:45.865948 34361 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 05:37:45.868308 master-0 kubenswrapper[34361]: E0224 05:37:45.868231 34361 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 24 05:37:45.868484 master-0 kubenswrapper[34361]: E0224 05:37:45.868464 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig podName:7ab989ea-1de3-497d-9834-889d587a0270 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:46.36839558 +0000 UTC m=+26.071012636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig") pod "oauth-openshift-6d4d899fc6-cgn6l" (UID: "7ab989ea-1de3-497d-9834-889d587a0270") : configmap "v4-0-config-system-cliconfig" not found Feb 24 05:37:45.868674 master-0 kubenswrapper[34361]: E0224 05:37:45.868597 34361 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Feb 24 05:37:45.868793 master-0 kubenswrapper[34361]: E0224 05:37:45.868756 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session podName:7ab989ea-1de3-497d-9834-889d587a0270 nodeName:}" failed. 
No retries permitted until 2026-02-24 05:37:46.368723329 +0000 UTC m=+26.071340375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session") pod "oauth-openshift-6d4d899fc6-cgn6l" (UID: "7ab989ea-1de3-497d-9834-889d587a0270") : secret "v4-0-config-system-session" not found Feb 24 05:37:45.870704 master-0 kubenswrapper[34361]: I0224 05:37:45.870616 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-audit-policies\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.870985 master-0 kubenswrapper[34361]: I0224 05:37:45.870737 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-error\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.871526 master-0 kubenswrapper[34361]: I0224 05:37:45.871453 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.875263 master-0 kubenswrapper[34361]: I0224 05:37:45.874752 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-login\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.875263 master-0 kubenswrapper[34361]: I0224 05:37:45.874985 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.877832 master-0 kubenswrapper[34361]: I0224 05:37:45.877781 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.878691 master-0 kubenswrapper[34361]: I0224 05:37:45.878647 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.879039 master-0 kubenswrapper[34361]: I0224 05:37:45.878955 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: 
\"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:45.899930 master-0 kubenswrapper[34361]: I0224 05:37:45.899855 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7lq\" (UniqueName: \"kubernetes.io/projected/7ab989ea-1de3-497d-9834-889d587a0270-kube-api-access-mp7lq\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:46.374339 master-0 kubenswrapper[34361]: I0224 05:37:46.374218 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:46.374620 master-0 kubenswrapper[34361]: I0224 05:37:46.374480 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:46.374782 master-0 kubenswrapper[34361]: E0224 05:37:46.374738 34361 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 24 05:37:46.374878 master-0 kubenswrapper[34361]: E0224 05:37:46.374845 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig podName:7ab989ea-1de3-497d-9834-889d587a0270 nodeName:}" failed. 
No retries permitted until 2026-02-24 05:37:47.374815116 +0000 UTC m=+27.077432202 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig") pod "oauth-openshift-6d4d899fc6-cgn6l" (UID: "7ab989ea-1de3-497d-9834-889d587a0270") : configmap "v4-0-config-system-cliconfig" not found Feb 24 05:37:46.384839 master-0 kubenswrapper[34361]: I0224 05:37:46.384759 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:47.393673 master-0 kubenswrapper[34361]: I0224 05:37:47.393557 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:47.394828 master-0 kubenswrapper[34361]: E0224 05:37:47.393846 34361 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 24 05:37:47.394828 master-0 kubenswrapper[34361]: E0224 05:37:47.394022 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig podName:7ab989ea-1de3-497d-9834-889d587a0270 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:49.393980906 +0000 UTC m=+29.096597992 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig") pod "oauth-openshift-6d4d899fc6-cgn6l" (UID: "7ab989ea-1de3-497d-9834-889d587a0270") : configmap "v4-0-config-system-cliconfig" not found Feb 24 05:37:48.321465 master-0 kubenswrapper[34361]: I0224 05:37:48.321390 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:48.321745 master-0 kubenswrapper[34361]: I0224 05:37:48.321655 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:37:48.395012 master-0 kubenswrapper[34361]: I0224 05:37:48.394892 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vd82q" Feb 24 05:37:49.426956 master-0 kubenswrapper[34361]: I0224 05:37:49.426883 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:49.427567 master-0 kubenswrapper[34361]: E0224 05:37:49.427070 34361 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 24 05:37:49.427567 master-0 kubenswrapper[34361]: E0224 05:37:49.427131 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig podName:7ab989ea-1de3-497d-9834-889d587a0270 nodeName:}" failed. No retries permitted until 2026-02-24 05:37:53.427113377 +0000 UTC m=+33.129730423 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig") pod "oauth-openshift-6d4d899fc6-cgn6l" (UID: "7ab989ea-1de3-497d-9834-889d587a0270") : configmap "v4-0-config-system-cliconfig" not found Feb 24 05:37:53.504340 master-0 kubenswrapper[34361]: I0224 05:37:53.503393 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:53.504340 master-0 kubenswrapper[34361]: I0224 05:37:53.504294 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d4d899fc6-cgn6l\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") " pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:53.577201 master-0 kubenswrapper[34361]: I0224 05:37:53.577139 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-s22jd"] Feb 24 05:37:53.578122 master-0 kubenswrapper[34361]: I0224 05:37:53.578092 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.580843 master-0 kubenswrapper[34361]: I0224 05:37:53.580794 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 05:37:53.581519 master-0 kubenswrapper[34361]: I0224 05:37:53.581472 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 05:37:53.593978 master-0 kubenswrapper[34361]: I0224 05:37:53.593935 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 05:37:53.594194 master-0 kubenswrapper[34361]: I0224 05:37:53.593935 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-9mjxb" Feb 24 05:37:53.604553 master-0 kubenswrapper[34361]: I0224 05:37:53.604487 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d0c314a-87da-4004-9f18-ba681929e8b3-trusted-ca\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.604553 master-0 kubenswrapper[34361]: I0224 05:37:53.604406 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 05:37:53.604925 master-0 kubenswrapper[34361]: I0224 05:37:53.604614 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0c314a-87da-4004-9f18-ba681929e8b3-config\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.604925 master-0 
kubenswrapper[34361]: I0224 05:37:53.604641 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tf55\" (UniqueName: \"kubernetes.io/projected/3d0c314a-87da-4004-9f18-ba681929e8b3-kube-api-access-2tf55\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.604925 master-0 kubenswrapper[34361]: I0224 05:37:53.604679 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0c314a-87da-4004-9f18-ba681929e8b3-serving-cert\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.620346 master-0 kubenswrapper[34361]: I0224 05:37:53.613455 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-s22jd"] Feb 24 05:37:53.631338 master-0 kubenswrapper[34361]: I0224 05:37:53.622367 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 05:37:53.705748 master-0 kubenswrapper[34361]: I0224 05:37:53.705249 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0c314a-87da-4004-9f18-ba681929e8b3-config\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.705748 master-0 kubenswrapper[34361]: I0224 05:37:53.705369 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tf55\" (UniqueName: \"kubernetes.io/projected/3d0c314a-87da-4004-9f18-ba681929e8b3-kube-api-access-2tf55\") pod 
\"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.705748 master-0 kubenswrapper[34361]: I0224 05:37:53.705413 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d0c314a-87da-4004-9f18-ba681929e8b3-serving-cert\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.705748 master-0 kubenswrapper[34361]: I0224 05:37:53.705444 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d0c314a-87da-4004-9f18-ba681929e8b3-trusted-ca\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.707323 master-0 kubenswrapper[34361]: I0224 05:37:53.707176 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3d0c314a-87da-4004-9f18-ba681929e8b3-trusted-ca\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.710462 master-0 kubenswrapper[34361]: I0224 05:37:53.710406 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d0c314a-87da-4004-9f18-ba681929e8b3-config\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.711335 master-0 kubenswrapper[34361]: I0224 05:37:53.711273 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d0c314a-87da-4004-9f18-ba681929e8b3-serving-cert\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.721241 master-0 kubenswrapper[34361]: I0224 05:37:53.721183 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tf55\" (UniqueName: \"kubernetes.io/projected/3d0c314a-87da-4004-9f18-ba681929e8b3-kube-api-access-2tf55\") pod \"console-operator-5df5ffc47c-s22jd\" (UID: \"3d0c314a-87da-4004-9f18-ba681929e8b3\") " pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:53.729238 master-0 kubenswrapper[34361]: I0224 05:37:53.729187 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:53.896367 master-0 kubenswrapper[34361]: I0224 05:37:53.896281 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:54.162213 master-0 kubenswrapper[34361]: I0224 05:37:54.162146 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"] Feb 24 05:37:54.173033 master-0 kubenswrapper[34361]: I0224 05:37:54.172996 34361 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 05:37:54.261086 master-0 kubenswrapper[34361]: I0224 05:37:54.261011 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" event={"ID":"7ab989ea-1de3-497d-9834-889d587a0270","Type":"ContainerStarted","Data":"592b9e064fed96c4747d994821e4b391018fb4ebc4b4fb73f26e97067b1a4a6c"} Feb 24 05:37:54.368116 master-0 kubenswrapper[34361]: I0224 05:37:54.367948 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-5df5ffc47c-s22jd"] Feb 24 05:37:54.381236 master-0 kubenswrapper[34361]: W0224 05:37:54.381182 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d0c314a_87da_4004_9f18_ba681929e8b3.slice/crio-321fde0a54e635fc93c414c1eb18d7962561e8680105a8a77b7db9f7247b4aab WatchSource:0}: Error finding container 321fde0a54e635fc93c414c1eb18d7962561e8680105a8a77b7db9f7247b4aab: Status 404 returned error can't find the container with id 321fde0a54e635fc93c414c1eb18d7962561e8680105a8a77b7db9f7247b4aab Feb 24 05:37:55.269801 master-0 kubenswrapper[34361]: I0224 05:37:55.269735 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" event={"ID":"3d0c314a-87da-4004-9f18-ba681929e8b3","Type":"ContainerStarted","Data":"321fde0a54e635fc93c414c1eb18d7962561e8680105a8a77b7db9f7247b4aab"} Feb 24 05:37:57.206976 master-0 kubenswrapper[34361]: I0224 05:37:57.206069 
34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm"] Feb 24 05:37:57.215232 master-0 kubenswrapper[34361]: I0224 05:37:57.214758 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" Feb 24 05:37:57.218096 master-0 kubenswrapper[34361]: I0224 05:37:57.218033 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-z9499" Feb 24 05:37:57.218399 master-0 kubenswrapper[34361]: I0224 05:37:57.218058 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 24 05:37:57.222132 master-0 kubenswrapper[34361]: I0224 05:37:57.221751 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm"] Feb 24 05:37:57.288822 master-0 kubenswrapper[34361]: I0224 05:37:57.288728 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bc18c03f-a464-49f1-8d9a-029d64c1ab0f-monitoring-plugin-cert\") pod \"monitoring-plugin-755c6d6fd4-4ztmm\" (UID: \"bc18c03f-a464-49f1-8d9a-029d64c1ab0f\") " pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" Feb 24 05:37:57.393728 master-0 kubenswrapper[34361]: I0224 05:37:57.390903 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bc18c03f-a464-49f1-8d9a-029d64c1ab0f-monitoring-plugin-cert\") pod \"monitoring-plugin-755c6d6fd4-4ztmm\" (UID: \"bc18c03f-a464-49f1-8d9a-029d64c1ab0f\") " pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" Feb 24 05:37:57.395511 master-0 kubenswrapper[34361]: I0224 05:37:57.395253 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/bc18c03f-a464-49f1-8d9a-029d64c1ab0f-monitoring-plugin-cert\") pod \"monitoring-plugin-755c6d6fd4-4ztmm\" (UID: \"bc18c03f-a464-49f1-8d9a-029d64c1ab0f\") " pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" Feb 24 05:37:57.605377 master-0 kubenswrapper[34361]: I0224 05:37:57.605295 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" Feb 24 05:37:58.003524 master-0 kubenswrapper[34361]: I0224 05:37:58.003469 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:37:58.003761 master-0 kubenswrapper[34361]: E0224 05:37:58.003730 34361 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:58.003841 master-0 kubenswrapper[34361]: E0224 05:37:58.003813 34361 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:58.003896 master-0 kubenswrapper[34361]: E0224 05:37:58.003873 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access podName:afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a nodeName:}" failed. No retries permitted until 2026-02-24 05:38:30.003854665 +0000 UTC m=+69.706471711 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access") pod "installer-3-master-0" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Feb 24 05:37:58.172654 master-0 kubenswrapper[34361]: I0224 05:37:58.172416 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm"] Feb 24 05:37:58.177989 master-0 kubenswrapper[34361]: W0224 05:37:58.177941 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc18c03f_a464_49f1_8d9a_029d64c1ab0f.slice/crio-ac42c51a4d16924bd679d5ab2153528ba70d661912c6a04d461d7211a819618d WatchSource:0}: Error finding container ac42c51a4d16924bd679d5ab2153528ba70d661912c6a04d461d7211a819618d: Status 404 returned error can't find the container with id ac42c51a4d16924bd679d5ab2153528ba70d661912c6a04d461d7211a819618d Feb 24 05:37:58.305415 master-0 kubenswrapper[34361]: I0224 05:37:58.305246 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" event={"ID":"3d0c314a-87da-4004-9f18-ba681929e8b3","Type":"ContainerStarted","Data":"63d3ca49c0b1b02ceb5c2a3dcd318c6889969926bac71a4f2d0a56e3cc8cd7e5"} Feb 24 05:37:58.306081 master-0 kubenswrapper[34361]: I0224 05:37:58.305448 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:58.307823 master-0 kubenswrapper[34361]: I0224 05:37:58.307754 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" event={"ID":"bc18c03f-a464-49f1-8d9a-029d64c1ab0f","Type":"ContainerStarted","Data":"ac42c51a4d16924bd679d5ab2153528ba70d661912c6a04d461d7211a819618d"} Feb 24 05:37:58.310302 master-0 
kubenswrapper[34361]: I0224 05:37:58.310143 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" event={"ID":"7ab989ea-1de3-497d-9834-889d587a0270","Type":"ContainerStarted","Data":"5975ab0155e8aeb506e71a83f7c1f9a9ec653513b28609bf539ddc6275cf7ab1"} Feb 24 05:37:58.310837 master-0 kubenswrapper[34361]: I0224 05:37:58.310782 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:58.319030 master-0 kubenswrapper[34361]: I0224 05:37:58.318974 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" Feb 24 05:37:58.337117 master-0 kubenswrapper[34361]: I0224 05:37:58.336880 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-5df5ffc47c-s22jd" podStartSLOduration=1.9718454090000002 podStartE2EDuration="5.336863781s" podCreationTimestamp="2026-02-24 05:37:53 +0000 UTC" firstStartedPulling="2026-02-24 05:37:54.385598364 +0000 UTC m=+34.088215430" lastFinishedPulling="2026-02-24 05:37:57.750616756 +0000 UTC m=+37.453233802" observedRunningTime="2026-02-24 05:37:58.336629545 +0000 UTC m=+38.039246661" watchObservedRunningTime="2026-02-24 05:37:58.336863781 +0000 UTC m=+38.039480827" Feb 24 05:37:58.575447 master-0 kubenswrapper[34361]: I0224 05:37:58.575331 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" podStartSLOduration=10.014779855 podStartE2EDuration="13.575295641s" podCreationTimestamp="2026-02-24 05:37:45 +0000 UTC" firstStartedPulling="2026-02-24 05:37:54.172893506 +0000 UTC m=+33.875510562" lastFinishedPulling="2026-02-24 05:37:57.733409302 +0000 UTC m=+37.436026348" observedRunningTime="2026-02-24 05:37:58.427911252 +0000 UTC m=+38.130528318" 
watchObservedRunningTime="2026-02-24 05:37:58.575295641 +0000 UTC m=+38.277912687" Feb 24 05:37:58.575933 master-0 kubenswrapper[34361]: I0224 05:37:58.575905 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-955b69498-crzjg"] Feb 24 05:37:58.576910 master-0 kubenswrapper[34361]: I0224 05:37:58.576881 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-crzjg" Feb 24 05:37:58.580043 master-0 kubenswrapper[34361]: I0224 05:37:58.579994 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-bnkwf" Feb 24 05:37:58.580236 master-0 kubenswrapper[34361]: I0224 05:37:58.580211 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 05:37:58.582523 master-0 kubenswrapper[34361]: I0224 05:37:58.582495 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 05:37:58.620353 master-0 kubenswrapper[34361]: I0224 05:37:58.620294 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2dbz\" (UniqueName: \"kubernetes.io/projected/691f1da3-ccd7-416a-9031-dea1b78f71ee-kube-api-access-s2dbz\") pod \"downloads-955b69498-crzjg\" (UID: \"691f1da3-ccd7-416a-9031-dea1b78f71ee\") " pod="openshift-console/downloads-955b69498-crzjg" Feb 24 05:37:58.622290 master-0 kubenswrapper[34361]: I0224 05:37:58.622217 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-crzjg"] Feb 24 05:37:58.693748 master-0 kubenswrapper[34361]: I0224 05:37:58.693695 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:37:58.722498 master-0 kubenswrapper[34361]: I0224 05:37:58.722437 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dbz\" (UniqueName: \"kubernetes.io/projected/691f1da3-ccd7-416a-9031-dea1b78f71ee-kube-api-access-s2dbz\") pod \"downloads-955b69498-crzjg\" (UID: \"691f1da3-ccd7-416a-9031-dea1b78f71ee\") " pod="openshift-console/downloads-955b69498-crzjg" Feb 24 05:37:58.767534 master-0 kubenswrapper[34361]: I0224 05:37:58.767461 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dbz\" (UniqueName: \"kubernetes.io/projected/691f1da3-ccd7-416a-9031-dea1b78f71ee-kube-api-access-s2dbz\") pod \"downloads-955b69498-crzjg\" (UID: \"691f1da3-ccd7-416a-9031-dea1b78f71ee\") " pod="openshift-console/downloads-955b69498-crzjg" Feb 24 05:37:58.896484 master-0 kubenswrapper[34361]: I0224 05:37:58.893395 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-955b69498-crzjg" Feb 24 05:37:59.389423 master-0 kubenswrapper[34361]: I0224 05:37:59.388587 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-955b69498-crzjg"] Feb 24 05:37:59.398990 master-0 kubenswrapper[34361]: W0224 05:37:59.398902 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod691f1da3_ccd7_416a_9031_dea1b78f71ee.slice/crio-b614a3b8c9688703457062ca2b6aefc7dd933de2e7e72fdc6f8e76fdb7b1a853 WatchSource:0}: Error finding container b614a3b8c9688703457062ca2b6aefc7dd933de2e7e72fdc6f8e76fdb7b1a853: Status 404 returned error can't find the container with id b614a3b8c9688703457062ca2b6aefc7dd933de2e7e72fdc6f8e76fdb7b1a853 Feb 24 05:38:00.337884 master-0 kubenswrapper[34361]: I0224 05:38:00.337790 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-crzjg" event={"ID":"691f1da3-ccd7-416a-9031-dea1b78f71ee","Type":"ContainerStarted","Data":"b614a3b8c9688703457062ca2b6aefc7dd933de2e7e72fdc6f8e76fdb7b1a853"} Feb 
24 05:38:00.339970 master-0 kubenswrapper[34361]: I0224 05:38:00.339913 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" event={"ID":"bc18c03f-a464-49f1-8d9a-029d64c1ab0f","Type":"ContainerStarted","Data":"d99f07f875940d7fe6d4d8935a58c6c171d5427ff06bd0915354903adc3d7f62"} Feb 24 05:38:00.372980 master-0 kubenswrapper[34361]: I0224 05:38:00.371076 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" podStartSLOduration=1.641727918 podStartE2EDuration="3.371054591s" podCreationTimestamp="2026-02-24 05:37:57 +0000 UTC" firstStartedPulling="2026-02-24 05:37:58.181535808 +0000 UTC m=+37.884152854" lastFinishedPulling="2026-02-24 05:37:59.910862481 +0000 UTC m=+39.613479527" observedRunningTime="2026-02-24 05:38:00.369895109 +0000 UTC m=+40.072512165" watchObservedRunningTime="2026-02-24 05:38:00.371054591 +0000 UTC m=+40.073671637" Feb 24 05:38:01.350968 master-0 kubenswrapper[34361]: I0224 05:38:01.348877 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" Feb 24 05:38:01.376334 master-0 kubenswrapper[34361]: I0224 05:38:01.372298 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-755c6d6fd4-4ztmm" Feb 24 05:38:09.952342 master-0 kubenswrapper[34361]: I0224 05:38:09.951563 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b6cfdbd-5qbf5"] Feb 24 05:38:09.961330 master-0 kubenswrapper[34361]: I0224 05:38:09.955255 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:09.961330 master-0 kubenswrapper[34361]: I0224 05:38:09.961054 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 05:38:09.961330 master-0 kubenswrapper[34361]: I0224 05:38:09.961252 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 05:38:09.961567 master-0 kubenswrapper[34361]: I0224 05:38:09.961363 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 05:38:09.961567 master-0 kubenswrapper[34361]: I0224 05:38:09.961076 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 05:38:09.964706 master-0 kubenswrapper[34361]: I0224 05:38:09.964648 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-bcdxv" Feb 24 05:38:09.964808 master-0 kubenswrapper[34361]: I0224 05:38:09.964734 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 05:38:09.984428 master-0 kubenswrapper[34361]: I0224 05:38:09.984377 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b6cfdbd-5qbf5"] Feb 24 05:38:10.108880 master-0 kubenswrapper[34361]: I0224 05:38:10.108818 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-oauth-config\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.109289 master-0 kubenswrapper[34361]: I0224 05:38:10.109266 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v988g\" 
(UniqueName: \"kubernetes.io/projected/f3038676-0c11-4616-bb1e-f5d396e420f4-kube-api-access-v988g\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.109478 master-0 kubenswrapper[34361]: I0224 05:38:10.109458 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-serving-cert\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.109614 master-0 kubenswrapper[34361]: I0224 05:38:10.109594 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-oauth-serving-cert\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.109733 master-0 kubenswrapper[34361]: I0224 05:38:10.109711 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-console-config\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.109847 master-0 kubenswrapper[34361]: I0224 05:38:10.109830 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-service-ca\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.211773 master-0 kubenswrapper[34361]: I0224 05:38:10.211543 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-oauth-config\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.211773 master-0 kubenswrapper[34361]: I0224 05:38:10.211632 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v988g\" (UniqueName: \"kubernetes.io/projected/f3038676-0c11-4616-bb1e-f5d396e420f4-kube-api-access-v988g\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.212252 master-0 kubenswrapper[34361]: I0224 05:38:10.212212 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-serving-cert\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.212385 master-0 kubenswrapper[34361]: I0224 05:38:10.212281 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-oauth-serving-cert\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.212385 master-0 kubenswrapper[34361]: I0224 05:38:10.212330 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-console-config\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.212385 master-0 
kubenswrapper[34361]: I0224 05:38:10.212364 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-service-ca\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.213821 master-0 kubenswrapper[34361]: I0224 05:38:10.213769 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-service-ca\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.213821 master-0 kubenswrapper[34361]: I0224 05:38:10.213772 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-console-config\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.217414 master-0 kubenswrapper[34361]: I0224 05:38:10.217324 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-oauth-serving-cert\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.223387 master-0 kubenswrapper[34361]: I0224 05:38:10.218398 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-oauth-config\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.224626 master-0 kubenswrapper[34361]: I0224 
05:38:10.224556 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-serving-cert\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.236521 master-0 kubenswrapper[34361]: I0224 05:38:10.236463 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v988g\" (UniqueName: \"kubernetes.io/projected/f3038676-0c11-4616-bb1e-f5d396e420f4-kube-api-access-v988g\") pod \"console-5b6cfdbd-5qbf5\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.302178 master-0 kubenswrapper[34361]: I0224 05:38:10.302072 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:10.826033 master-0 kubenswrapper[34361]: I0224 05:38:10.825937 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b6cfdbd-5qbf5"] Feb 24 05:38:10.829789 master-0 kubenswrapper[34361]: W0224 05:38:10.829740 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3038676_0c11_4616_bb1e_f5d396e420f4.slice/crio-40a246baf7d57e0d6e74e45814f0b62e162e91d045cd732a766d0cc321da8d9a WatchSource:0}: Error finding container 40a246baf7d57e0d6e74e45814f0b62e162e91d045cd732a766d0cc321da8d9a: Status 404 returned error can't find the container with id 40a246baf7d57e0d6e74e45814f0b62e162e91d045cd732a766d0cc321da8d9a Feb 24 05:38:11.185863 master-0 kubenswrapper[34361]: I0224 05:38:11.185540 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"] Feb 24 05:38:11.454485 master-0 kubenswrapper[34361]: I0224 05:38:11.454293 34361 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-console/console-5b6cfdbd-5qbf5" event={"ID":"f3038676-0c11-4616-bb1e-f5d396e420f4","Type":"ContainerStarted","Data":"40a246baf7d57e0d6e74e45814f0b62e162e91d045cd732a766d0cc321da8d9a"} Feb 24 05:38:11.651849 master-0 kubenswrapper[34361]: I0224 05:38:11.651765 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 24 05:38:11.653584 master-0 kubenswrapper[34361]: I0224 05:38:11.653287 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 05:38:11.658269 master-0 kubenswrapper[34361]: I0224 05:38:11.656582 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 05:38:11.665826 master-0 kubenswrapper[34361]: I0224 05:38:11.665765 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-d88q9" Feb 24 05:38:11.666838 master-0 kubenswrapper[34361]: I0224 05:38:11.666797 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 24 05:38:11.740718 master-0 kubenswrapper[34361]: I0224 05:38:11.740659 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 05:38:11.741722 master-0 kubenswrapper[34361]: I0224 05:38:11.740975 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-var-lock\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0" 
Feb 24 05:38:11.741722 master-0 kubenswrapper[34361]: I0224 05:38:11.741212 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60e6a292-a766-471c-90c8-843f10a5820c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 05:38:11.843088 master-0 kubenswrapper[34361]: I0224 05:38:11.843000 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 05:38:11.843427 master-0 kubenswrapper[34361]: I0224 05:38:11.843164 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 05:38:11.843427 master-0 kubenswrapper[34361]: I0224 05:38:11.843250 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-var-lock\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 05:38:11.843427 master-0 kubenswrapper[34361]: I0224 05:38:11.843194 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-var-lock\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 
05:38:11.843427 master-0 kubenswrapper[34361]: I0224 05:38:11.843354 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60e6a292-a766-471c-90c8-843f10a5820c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 24 05:38:11.868990 master-0 kubenswrapper[34361]: I0224 05:38:11.868927 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60e6a292-a766-471c-90c8-843f10a5820c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " pod="openshift-kube-apiserver/installer-4-master-0"
Feb 24 05:38:11.999041 master-0 kubenswrapper[34361]: I0224 05:38:11.998880 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 24 05:38:12.420434 master-0 kubenswrapper[34361]: I0224 05:38:12.420190 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Feb 24 05:38:12.433959 master-0 kubenswrapper[34361]: W0224 05:38:12.433860 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod60e6a292_a766_471c_90c8_843f10a5820c.slice/crio-faab8261c93a00388c02a6daf4ef05b9746c5de9a14514982ae6912236cceade WatchSource:0}: Error finding container faab8261c93a00388c02a6daf4ef05b9746c5de9a14514982ae6912236cceade: Status 404 returned error can't find the container with id faab8261c93a00388c02a6daf4ef05b9746c5de9a14514982ae6912236cceade
Feb 24 05:38:12.470711 master-0 kubenswrapper[34361]: I0224 05:38:12.470620 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"60e6a292-a766-471c-90c8-843f10a5820c","Type":"ContainerStarted","Data":"faab8261c93a00388c02a6daf4ef05b9746c5de9a14514982ae6912236cceade"}
Feb 24 05:38:12.679267 master-0 kubenswrapper[34361]: I0224 05:38:12.676742 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7657d7494-mmsz6"]
Feb 24 05:38:12.679267 master-0 kubenswrapper[34361]: I0224 05:38:12.677033 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" podUID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerName="controller-manager" containerID="cri-o://01f50b460983856284a210d9834ef5eef41fece749b0d8e696f6905032f26d3a" gracePeriod=30
Feb 24 05:38:12.734968 master-0 kubenswrapper[34361]: I0224 05:38:12.734903 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"]
Feb 24 05:38:12.735235 master-0 kubenswrapper[34361]: I0224 05:38:12.735177 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" podUID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerName="route-controller-manager" containerID="cri-o://772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b" gracePeriod=30
Feb 24 05:38:13.479707 master-0 kubenswrapper[34361]: I0224 05:38:13.479247 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"
Feb 24 05:38:13.523031 master-0 kubenswrapper[34361]: I0224 05:38:13.514618 34361 generic.go:334] "Generic (PLEG): container finished" podID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerID="01f50b460983856284a210d9834ef5eef41fece749b0d8e696f6905032f26d3a" exitCode=0
Feb 24 05:38:13.523031 master-0 kubenswrapper[34361]: I0224 05:38:13.514718 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" event={"ID":"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4","Type":"ContainerDied","Data":"01f50b460983856284a210d9834ef5eef41fece749b0d8e696f6905032f26d3a"}
Feb 24 05:38:13.523031 master-0 kubenswrapper[34361]: I0224 05:38:13.514778 34361 scope.go:117] "RemoveContainer" containerID="d54fd19b9eb4386cf27b0171bbd26afecfaf6c5721e1c1b2aba9af1126e48295"
Feb 24 05:38:13.525860 master-0 kubenswrapper[34361]: I0224 05:38:13.525787 34361 generic.go:334] "Generic (PLEG): container finished" podID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerID="772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b" exitCode=0
Feb 24 05:38:13.527026 master-0 kubenswrapper[34361]: I0224 05:38:13.525989 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" event={"ID":"b426cb33-1624-45e6-b8c5-4e8d251f6339","Type":"ContainerDied","Data":"772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b"}
Feb 24 05:38:13.527026 master-0 kubenswrapper[34361]: I0224 05:38:13.526064 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd" event={"ID":"b426cb33-1624-45e6-b8c5-4e8d251f6339","Type":"ContainerDied","Data":"937f03ad2559d182c0cdd1d2762487960e12dca202f4d10b53ec97e755cb0a40"}
Feb 24 05:38:13.527333 master-0 kubenswrapper[34361]: I0224 05:38:13.527292 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"
Feb 24 05:38:13.528209 master-0 kubenswrapper[34361]: I0224 05:38:13.528148 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"60e6a292-a766-471c-90c8-843f10a5820c","Type":"ContainerStarted","Data":"c551e23385455f60a1d1ce791e66a617e0f04c1d922be0d890276b70483491f6"}
Feb 24 05:38:13.558588 master-0 kubenswrapper[34361]: I0224 05:38:13.555124 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.5550943090000002 podStartE2EDuration="2.555094309s" podCreationTimestamp="2026-02-24 05:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:38:13.553379943 +0000 UTC m=+53.255996999" watchObservedRunningTime="2026-02-24 05:38:13.555094309 +0000 UTC m=+53.257711355"
Feb 24 05:38:13.626431 master-0 kubenswrapper[34361]: I0224 05:38:13.626289 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config\") pod \"b426cb33-1624-45e6-b8c5-4e8d251f6339\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") "
Feb 24 05:38:13.626682 master-0 kubenswrapper[34361]: I0224 05:38:13.626518 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjtv8\" (UniqueName: \"kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8\") pod \"b426cb33-1624-45e6-b8c5-4e8d251f6339\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") "
Feb 24 05:38:13.626682 master-0 kubenswrapper[34361]: I0224 05:38:13.626618 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca\") pod \"b426cb33-1624-45e6-b8c5-4e8d251f6339\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") "
Feb 24 05:38:13.626760 master-0 kubenswrapper[34361]: I0224 05:38:13.626683 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert\") pod \"b426cb33-1624-45e6-b8c5-4e8d251f6339\" (UID: \"b426cb33-1624-45e6-b8c5-4e8d251f6339\") "
Feb 24 05:38:13.627288 master-0 kubenswrapper[34361]: I0224 05:38:13.627224 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca" (OuterVolumeSpecName: "client-ca") pod "b426cb33-1624-45e6-b8c5-4e8d251f6339" (UID: "b426cb33-1624-45e6-b8c5-4e8d251f6339"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:13.631991 master-0 kubenswrapper[34361]: I0224 05:38:13.629574 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b426cb33-1624-45e6-b8c5-4e8d251f6339" (UID: "b426cb33-1624-45e6-b8c5-4e8d251f6339"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:13.631991 master-0 kubenswrapper[34361]: I0224 05:38:13.629786 34361 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:13.631991 master-0 kubenswrapper[34361]: I0224 05:38:13.629894 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config" (OuterVolumeSpecName: "config") pod "b426cb33-1624-45e6-b8c5-4e8d251f6339" (UID: "b426cb33-1624-45e6-b8c5-4e8d251f6339"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:13.634394 master-0 kubenswrapper[34361]: I0224 05:38:13.632285 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8" (OuterVolumeSpecName: "kube-api-access-hjtv8") pod "b426cb33-1624-45e6-b8c5-4e8d251f6339" (UID: "b426cb33-1624-45e6-b8c5-4e8d251f6339"). InnerVolumeSpecName "kube-api-access-hjtv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:38:13.731671 master-0 kubenswrapper[34361]: I0224 05:38:13.731593 34361 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b426cb33-1624-45e6-b8c5-4e8d251f6339-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:13.731671 master-0 kubenswrapper[34361]: I0224 05:38:13.731642 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b426cb33-1624-45e6-b8c5-4e8d251f6339-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:13.731671 master-0 kubenswrapper[34361]: I0224 05:38:13.731653 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjtv8\" (UniqueName: \"kubernetes.io/projected/b426cb33-1624-45e6-b8c5-4e8d251f6339-kube-api-access-hjtv8\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:13.868686 master-0 kubenswrapper[34361]: I0224 05:38:13.868619 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"]
Feb 24 05:38:13.872241 master-0 kubenswrapper[34361]: I0224 05:38:13.872179 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654dcf5585-fgmnd"]
Feb 24 05:38:13.984992 master-0 kubenswrapper[34361]: I0224 05:38:13.984906 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"]
Feb 24 05:38:13.985629 master-0 kubenswrapper[34361]: E0224 05:38:13.985417 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerName="route-controller-manager"
Feb 24 05:38:13.985629 master-0 kubenswrapper[34361]: I0224 05:38:13.985444 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerName="route-controller-manager"
Feb 24 05:38:13.985629 master-0 kubenswrapper[34361]: E0224 05:38:13.985459 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerName="route-controller-manager"
Feb 24 05:38:13.985629 master-0 kubenswrapper[34361]: I0224 05:38:13.985468 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerName="route-controller-manager"
Feb 24 05:38:13.985890 master-0 kubenswrapper[34361]: I0224 05:38:13.985757 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerName="route-controller-manager"
Feb 24 05:38:13.985890 master-0 kubenswrapper[34361]: I0224 05:38:13.985805 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="b426cb33-1624-45e6-b8c5-4e8d251f6339" containerName="route-controller-manager"
Feb 24 05:38:13.986462 master-0 kubenswrapper[34361]: I0224 05:38:13.986421 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:13.991158 master-0 kubenswrapper[34361]: I0224 05:38:13.991104 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 05:38:13.991416 master-0 kubenswrapper[34361]: I0224 05:38:13.991380 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 05:38:13.991585 master-0 kubenswrapper[34361]: I0224 05:38:13.991557 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 05:38:13.991760 master-0 kubenswrapper[34361]: I0224 05:38:13.991724 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 05:38:13.991917 master-0 kubenswrapper[34361]: I0224 05:38:13.991887 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-d22d8"
Feb 24 05:38:13.992372 master-0 kubenswrapper[34361]: I0224 05:38:13.992341 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 05:38:14.002825 master-0 kubenswrapper[34361]: I0224 05:38:14.002760 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"]
Feb 24 05:38:14.039066 master-0 kubenswrapper[34361]: I0224 05:38:14.038939 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-serving-cert\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.039520 master-0 kubenswrapper[34361]: I0224 05:38:14.039132 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-config\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.039520 master-0 kubenswrapper[34361]: I0224 05:38:14.039417 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-client-ca\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.039520 master-0 kubenswrapper[34361]: I0224 05:38:14.039471 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnnkq\" (UniqueName: \"kubernetes.io/projected/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-kube-api-access-tnnkq\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.141199 master-0 kubenswrapper[34361]: I0224 05:38:14.140945 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-client-ca\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.141199 master-0 kubenswrapper[34361]: I0224 05:38:14.141034 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnnkq\" (UniqueName: \"kubernetes.io/projected/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-kube-api-access-tnnkq\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.141199 master-0 kubenswrapper[34361]: I0224 05:38:14.141100 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-serving-cert\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.141199 master-0 kubenswrapper[34361]: I0224 05:38:14.141136 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-config\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.142147 master-0 kubenswrapper[34361]: I0224 05:38:14.142084 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-client-ca\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.143141 master-0 kubenswrapper[34361]: I0224 05:38:14.143051 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-config\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.145394 master-0 kubenswrapper[34361]: I0224 05:38:14.145255 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-serving-cert\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.170209 master-0 kubenswrapper[34361]: I0224 05:38:14.170115 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnnkq\" (UniqueName: \"kubernetes.io/projected/59c60e7e-4fe2-4405-a1d6-c0300a27bcf6-kube-api-access-tnnkq\") pod \"route-controller-manager-85f8857db4-hhqvj\" (UID: \"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6\") " pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.359412 master-0 kubenswrapper[34361]: I0224 05:38:14.359349 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"
Feb 24 05:38:14.611092 master-0 kubenswrapper[34361]: I0224 05:38:14.611009 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b426cb33-1624-45e6-b8c5-4e8d251f6339" path="/var/lib/kubelet/pods/b426cb33-1624-45e6-b8c5-4e8d251f6339/volumes"
Feb 24 05:38:15.355387 master-0 kubenswrapper[34361]: I0224 05:38:15.354646 34361 scope.go:117] "RemoveContainer" containerID="772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b"
Feb 24 05:38:15.422112 master-0 kubenswrapper[34361]: I0224 05:38:15.422022 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6"
Feb 24 05:38:15.428363 master-0 kubenswrapper[34361]: I0224 05:38:15.428272 34361 scope.go:117] "RemoveContainer" containerID="adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff"
Feb 24 05:38:15.465187 master-0 kubenswrapper[34361]: I0224 05:38:15.465090 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert\") pod \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") "
Feb 24 05:38:15.465495 master-0 kubenswrapper[34361]: I0224 05:38:15.465201 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lkf2\" (UniqueName: \"kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2\") pod \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") "
Feb 24 05:38:15.465495 master-0 kubenswrapper[34361]: I0224 05:38:15.465300 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config\") pod \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") "
Feb 24 05:38:15.465495 master-0 kubenswrapper[34361]: I0224 05:38:15.465391 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca\") pod \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") "
Feb 24 05:38:15.465495 master-0 kubenswrapper[34361]: I0224 05:38:15.465428 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles\") pod \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\" (UID: \"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4\") "
Feb 24 05:38:15.466439 master-0 kubenswrapper[34361]: I0224 05:38:15.466354 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" (UID: "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:15.466577 master-0 kubenswrapper[34361]: I0224 05:38:15.466498 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config" (OuterVolumeSpecName: "config") pod "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" (UID: "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:15.469480 master-0 kubenswrapper[34361]: I0224 05:38:15.467771 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" (UID: "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:15.472225 master-0 kubenswrapper[34361]: I0224 05:38:15.472177 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2" (OuterVolumeSpecName: "kube-api-access-9lkf2") pod "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" (UID: "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4"). InnerVolumeSpecName "kube-api-access-9lkf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:38:15.473036 master-0 kubenswrapper[34361]: I0224 05:38:15.472971 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" (UID: "da19bb93-c9ba-4e60-9e83-d92bc0dd33c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:15.479360 master-0 kubenswrapper[34361]: I0224 05:38:15.478348 34361 scope.go:117] "RemoveContainer" containerID="772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b"
Feb 24 05:38:15.485803 master-0 kubenswrapper[34361]: E0224 05:38:15.480027 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b\": container with ID starting with 772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b not found: ID does not exist" containerID="772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b"
Feb 24 05:38:15.485803 master-0 kubenswrapper[34361]: I0224 05:38:15.480118 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b"} err="failed to get container status \"772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b\": rpc error: code = NotFound desc = could not find container \"772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b\": container with ID starting with 772e9c8aa1fdcd46c0ef3ed958fb64c82190e2210686900e097b65be829cca8b not found: ID does not exist"
Feb 24 05:38:15.485803 master-0 kubenswrapper[34361]: I0224 05:38:15.480190 34361 scope.go:117] "RemoveContainer" containerID="adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff"
Feb 24 05:38:15.485803 master-0 kubenswrapper[34361]: E0224 05:38:15.480778 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff\": container with ID starting with adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff not found: ID does not exist" containerID="adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff"
Feb 24 05:38:15.485803 master-0 kubenswrapper[34361]: I0224 05:38:15.480839 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff"} err="failed to get container status \"adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff\": rpc error: code = NotFound desc = could not find container \"adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff\": container with ID starting with adced94d4c33380fc07637d5383f9ac889fe6f6c9825230d6be68622728bb5ff not found: ID does not exist"
Feb 24 05:38:15.585218 master-0 kubenswrapper[34361]: I0224 05:38:15.585115 34361 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:15.585218 master-0 kubenswrapper[34361]: I0224 05:38:15.585164 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lkf2\" (UniqueName: \"kubernetes.io/projected/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-kube-api-access-9lkf2\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:15.585218 master-0 kubenswrapper[34361]: I0224 05:38:15.585181 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:15.585218 master-0 kubenswrapper[34361]: I0224 05:38:15.585195 34361 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:15.585218 master-0 kubenswrapper[34361]: I0224 05:38:15.585209 34361 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:15.587467 master-0 kubenswrapper[34361]: I0224 05:38:15.587407 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6" event={"ID":"da19bb93-c9ba-4e60-9e83-d92bc0dd33c4","Type":"ContainerDied","Data":"3aa615a9d796b417e579505462fba818eb63c6e04f0fc9bcc949d228f425e015"}
Feb 24 05:38:15.587542 master-0 kubenswrapper[34361]: I0224 05:38:15.587472 34361 scope.go:117] "RemoveContainer" containerID="01f50b460983856284a210d9834ef5eef41fece749b0d8e696f6905032f26d3a"
Feb 24 05:38:15.587542 master-0 kubenswrapper[34361]: I0224 05:38:15.587480 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7657d7494-mmsz6"
Feb 24 05:38:15.623359 master-0 kubenswrapper[34361]: I0224 05:38:15.623280 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7657d7494-mmsz6"]
Feb 24 05:38:15.630032 master-0 kubenswrapper[34361]: I0224 05:38:15.629399 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7657d7494-mmsz6"]
Feb 24 05:38:15.845555 master-0 kubenswrapper[34361]: I0224 05:38:15.845467 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj"]
Feb 24 05:38:15.851180 master-0 kubenswrapper[34361]: W0224 05:38:15.851078 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c60e7e_4fe2_4405_a1d6_c0300a27bcf6.slice/crio-4c88549cd957c4b35b90a0aa003cc430c12efebeb0b1f0056e7a430d086ad60c WatchSource:0}: Error finding container 4c88549cd957c4b35b90a0aa003cc430c12efebeb0b1f0056e7a430d086ad60c: Status 404 returned error can't find the container with id 4c88549cd957c4b35b90a0aa003cc430c12efebeb0b1f0056e7a430d086ad60c
Feb 24 05:38:15.987761 master-0 kubenswrapper[34361]: I0224 05:38:15.987688 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58c8457759-bzjjl"]
Feb 24 05:38:15.988146 master-0 kubenswrapper[34361]: E0224 05:38:15.988113 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerName="controller-manager"
Feb 24 05:38:15.988146 master-0 kubenswrapper[34361]: I0224 05:38:15.988145 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerName="controller-manager"
Feb 24 05:38:15.988277 master-0 kubenswrapper[34361]: E0224 05:38:15.988204 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerName="controller-manager"
Feb 24 05:38:15.988277 master-0 kubenswrapper[34361]: I0224 05:38:15.988220 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerName="controller-manager"
Feb 24 05:38:15.989660 master-0 kubenswrapper[34361]: I0224 05:38:15.988515 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerName="controller-manager"
Feb 24 05:38:15.989660 master-0 kubenswrapper[34361]: I0224 05:38:15.988574 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" containerName="controller-manager"
Feb 24 05:38:15.989660 master-0 kubenswrapper[34361]: I0224 05:38:15.989306 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:15.993860 master-0 kubenswrapper[34361]: I0224 05:38:15.991663 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 05:38:15.993860 master-0 kubenswrapper[34361]: I0224 05:38:15.992411 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hcf\" (UniqueName: \"kubernetes.io/projected/134ee919-d06c-4b68-b7c1-88f015ccfe32-kube-api-access-f5hcf\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:15.993860 master-0 kubenswrapper[34361]: I0224 05:38:15.992501 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-proxy-ca-bundles\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:15.993860 master-0 kubenswrapper[34361]: I0224 05:38:15.992574 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-client-ca\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:15.993860 master-0 kubenswrapper[34361]: I0224 05:38:15.992651 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-config\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:15.993860 master-0 kubenswrapper[34361]: I0224 05:38:15.992704 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/134ee919-d06c-4b68-b7c1-88f015ccfe32-serving-cert\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:15.993860 master-0 kubenswrapper[34361]: I0224 05:38:15.993735 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 05:38:15.994675 master-0 kubenswrapper[34361]: I0224 05:38:15.994006 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 05:38:15.994675 master-0 kubenswrapper[34361]: I0224 05:38:15.994260 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 05:38:15.994675 master-0 kubenswrapper[34361]: I0224 05:38:15.994475 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 05:38:15.997870 master-0 kubenswrapper[34361]: I0224 05:38:15.997807 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rv6pq"
Feb 24 05:38:16.019782 master-0 kubenswrapper[34361]: I0224 05:38:16.019716 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c8457759-bzjjl"]
Feb 24 05:38:16.019983 master-0 kubenswrapper[34361]: I0224 05:38:16.019783 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 05:38:16.096594 master-0 kubenswrapper[34361]: I0224 05:38:16.095621 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5hcf\" (UniqueName: \"kubernetes.io/projected/134ee919-d06c-4b68-b7c1-88f015ccfe32-kube-api-access-f5hcf\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:16.096594 master-0 kubenswrapper[34361]: I0224 05:38:16.096371 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-proxy-ca-bundles\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:16.096594 master-0 kubenswrapper[34361]: I0224 05:38:16.096475 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-client-ca\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:16.096594 master-0 kubenswrapper[34361]: I0224 05:38:16.096576 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-config\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:16.096922 master-0 kubenswrapper[34361]: I0224 05:38:16.096650 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/134ee919-d06c-4b68-b7c1-88f015ccfe32-serving-cert\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:16.100635 master-0 kubenswrapper[34361]: I0224 05:38:16.100588 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-proxy-ca-bundles\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:16.102801 master-0 kubenswrapper[34361]: I0224 05:38:16.102724 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-client-ca\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl"
Feb 24 05:38:16.103939 master-0 kubenswrapper[34361]: I0224 
05:38:16.103886 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/134ee919-d06c-4b68-b7c1-88f015ccfe32-config\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" Feb 24 05:38:16.107174 master-0 kubenswrapper[34361]: I0224 05:38:16.107128 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/134ee919-d06c-4b68-b7c1-88f015ccfe32-serving-cert\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" Feb 24 05:38:16.124480 master-0 kubenswrapper[34361]: I0224 05:38:16.124376 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5hcf\" (UniqueName: \"kubernetes.io/projected/134ee919-d06c-4b68-b7c1-88f015ccfe32-kube-api-access-f5hcf\") pod \"controller-manager-58c8457759-bzjjl\" (UID: \"134ee919-d06c-4b68-b7c1-88f015ccfe32\") " pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" Feb 24 05:38:16.333006 master-0 kubenswrapper[34361]: I0224 05:38:16.332916 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" Feb 24 05:38:16.606599 master-0 kubenswrapper[34361]: I0224 05:38:16.606512 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da19bb93-c9ba-4e60-9e83-d92bc0dd33c4" path="/var/lib/kubelet/pods/da19bb93-c9ba-4e60-9e83-d92bc0dd33c4/volumes" Feb 24 05:38:16.607235 master-0 kubenswrapper[34361]: I0224 05:38:16.607190 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj" Feb 24 05:38:16.607330 master-0 kubenswrapper[34361]: I0224 05:38:16.607239 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b6cfdbd-5qbf5" event={"ID":"f3038676-0c11-4616-bb1e-f5d396e420f4","Type":"ContainerStarted","Data":"3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4"} Feb 24 05:38:16.607330 master-0 kubenswrapper[34361]: I0224 05:38:16.607267 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj" event={"ID":"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6","Type":"ContainerStarted","Data":"a4413c4c1c385d31ce79117f9acc18addfb4ab9ebc72d808d18f0e27e0d8aadd"} Feb 24 05:38:16.607483 master-0 kubenswrapper[34361]: I0224 05:38:16.607334 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj" event={"ID":"59c60e7e-4fe2-4405-a1d6-c0300a27bcf6","Type":"ContainerStarted","Data":"4c88549cd957c4b35b90a0aa003cc430c12efebeb0b1f0056e7a430d086ad60c"} Feb 24 05:38:16.632655 master-0 kubenswrapper[34361]: I0224 05:38:16.632346 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b6cfdbd-5qbf5" podStartSLOduration=3.047792705 podStartE2EDuration="7.632307133s" podCreationTimestamp="2026-02-24 05:38:09 +0000 UTC" 
firstStartedPulling="2026-02-24 05:38:10.832933115 +0000 UTC m=+50.535550161" lastFinishedPulling="2026-02-24 05:38:15.417447503 +0000 UTC m=+55.120064589" observedRunningTime="2026-02-24 05:38:16.632185289 +0000 UTC m=+56.334802335" watchObservedRunningTime="2026-02-24 05:38:16.632307133 +0000 UTC m=+56.334924179" Feb 24 05:38:16.657267 master-0 kubenswrapper[34361]: I0224 05:38:16.657083 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj" podStartSLOduration=4.657009177 podStartE2EDuration="4.657009177s" podCreationTimestamp="2026-02-24 05:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:38:16.655583109 +0000 UTC m=+56.358200235" watchObservedRunningTime="2026-02-24 05:38:16.657009177 +0000 UTC m=+56.359626263" Feb 24 05:38:16.672158 master-0 kubenswrapper[34361]: I0224 05:38:16.671977 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85f8857db4-hhqvj" Feb 24 05:38:16.769122 master-0 kubenswrapper[34361]: I0224 05:38:16.769050 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58c8457759-bzjjl"] Feb 24 05:38:16.772701 master-0 kubenswrapper[34361]: W0224 05:38:16.772654 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod134ee919_d06c_4b68_b7c1_88f015ccfe32.slice/crio-96e0afd0fbc7d79e5fb50a9cc908aba614e2964bd72f5b69a97c28afdac4fc81 WatchSource:0}: Error finding container 96e0afd0fbc7d79e5fb50a9cc908aba614e2964bd72f5b69a97c28afdac4fc81: Status 404 returned error can't find the container with id 96e0afd0fbc7d79e5fb50a9cc908aba614e2964bd72f5b69a97c28afdac4fc81 Feb 24 05:38:17.616186 master-0 kubenswrapper[34361]: I0224 
05:38:17.616099 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" event={"ID":"134ee919-d06c-4b68-b7c1-88f015ccfe32","Type":"ContainerStarted","Data":"3e0143fb6354c2723750925e7f25be566defc23c3e236b1351e542488d094562"} Feb 24 05:38:17.616186 master-0 kubenswrapper[34361]: I0224 05:38:17.616192 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" event={"ID":"134ee919-d06c-4b68-b7c1-88f015ccfe32","Type":"ContainerStarted","Data":"96e0afd0fbc7d79e5fb50a9cc908aba614e2964bd72f5b69a97c28afdac4fc81"} Feb 24 05:38:17.616648 master-0 kubenswrapper[34361]: I0224 05:38:17.616607 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" Feb 24 05:38:17.623753 master-0 kubenswrapper[34361]: I0224 05:38:17.623698 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" Feb 24 05:38:17.658988 master-0 kubenswrapper[34361]: I0224 05:38:17.658792 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58c8457759-bzjjl" podStartSLOduration=5.65876468 podStartE2EDuration="5.65876468s" podCreationTimestamp="2026-02-24 05:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:38:17.651926446 +0000 UTC m=+57.354543502" watchObservedRunningTime="2026-02-24 05:38:17.65876468 +0000 UTC m=+57.361381726" Feb 24 05:38:17.949925 master-0 kubenswrapper[34361]: I0224 05:38:17.949727 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67bcb9df49-d2cv6"] Feb 24 05:38:17.952129 master-0 kubenswrapper[34361]: I0224 05:38:17.952080 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:17.963701 master-0 kubenswrapper[34361]: I0224 05:38:17.962717 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 05:38:17.968162 master-0 kubenswrapper[34361]: I0224 05:38:17.968072 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67bcb9df49-d2cv6"] Feb 24 05:38:18.046139 master-0 kubenswrapper[34361]: I0224 05:38:18.046078 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-oauth-config\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.046139 master-0 kubenswrapper[34361]: I0224 05:38:18.046137 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-oauth-serving-cert\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.046470 master-0 kubenswrapper[34361]: I0224 05:38:18.046172 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-serving-cert\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.046470 master-0 kubenswrapper[34361]: I0224 05:38:18.046209 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qjkx\" (UniqueName: 
\"kubernetes.io/projected/c300d6c7-66fb-41c5-b099-0e9e4a235e76-kube-api-access-8qjkx\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.046470 master-0 kubenswrapper[34361]: I0224 05:38:18.046238 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-trusted-ca-bundle\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.046470 master-0 kubenswrapper[34361]: I0224 05:38:18.046434 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-config\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.046699 master-0 kubenswrapper[34361]: I0224 05:38:18.046602 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-service-ca\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.149857 master-0 kubenswrapper[34361]: I0224 05:38:18.149746 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-trusted-ca-bundle\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.149857 master-0 kubenswrapper[34361]: I0224 05:38:18.149832 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-config\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.150221 master-0 kubenswrapper[34361]: I0224 05:38:18.149902 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-service-ca\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.150298 master-0 kubenswrapper[34361]: I0224 05:38:18.150228 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-oauth-config\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.150439 master-0 kubenswrapper[34361]: I0224 05:38:18.150333 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-oauth-serving-cert\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.150439 master-0 kubenswrapper[34361]: I0224 05:38:18.150382 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-serving-cert\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.150621 master-0 
kubenswrapper[34361]: I0224 05:38:18.150581 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qjkx\" (UniqueName: \"kubernetes.io/projected/c300d6c7-66fb-41c5-b099-0e9e4a235e76-kube-api-access-8qjkx\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.151152 master-0 kubenswrapper[34361]: I0224 05:38:18.151109 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-config\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.151942 master-0 kubenswrapper[34361]: I0224 05:38:18.151905 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-oauth-serving-cert\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.152614 master-0 kubenswrapper[34361]: I0224 05:38:18.152545 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-service-ca\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.155134 master-0 kubenswrapper[34361]: I0224 05:38:18.154892 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-serving-cert\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.155134 
master-0 kubenswrapper[34361]: I0224 05:38:18.155018 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-oauth-config\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.164930 master-0 kubenswrapper[34361]: I0224 05:38:18.164860 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-trusted-ca-bundle\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.169684 master-0 kubenswrapper[34361]: I0224 05:38:18.169630 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qjkx\" (UniqueName: \"kubernetes.io/projected/c300d6c7-66fb-41c5-b099-0e9e4a235e76-kube-api-access-8qjkx\") pod \"console-67bcb9df49-d2cv6\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") " pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.299983 master-0 kubenswrapper[34361]: I0224 05:38:18.299899 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:18.756532 master-0 kubenswrapper[34361]: I0224 05:38:18.756459 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67bcb9df49-d2cv6"] Feb 24 05:38:18.766719 master-0 kubenswrapper[34361]: W0224 05:38:18.766626 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc300d6c7_66fb_41c5_b099_0e9e4a235e76.slice/crio-aa3668feecd3d666dacc9f48e108a998a72d35ceb407ca62190b08acd01e6da6 WatchSource:0}: Error finding container aa3668feecd3d666dacc9f48e108a998a72d35ceb407ca62190b08acd01e6da6: Status 404 returned error can't find the container with id aa3668feecd3d666dacc9f48e108a998a72d35ceb407ca62190b08acd01e6da6 Feb 24 05:38:19.633648 master-0 kubenswrapper[34361]: I0224 05:38:19.633449 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bcb9df49-d2cv6" event={"ID":"c300d6c7-66fb-41c5-b099-0e9e4a235e76","Type":"ContainerStarted","Data":"76eff019d66fe5abbd1ccb06357908f71a83ac02cd16385fcdcc99a4c5ce4117"} Feb 24 05:38:19.633988 master-0 kubenswrapper[34361]: I0224 05:38:19.633746 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bcb9df49-d2cv6" event={"ID":"c300d6c7-66fb-41c5-b099-0e9e4a235e76","Type":"ContainerStarted","Data":"aa3668feecd3d666dacc9f48e108a998a72d35ceb407ca62190b08acd01e6da6"} Feb 24 05:38:19.659663 master-0 kubenswrapper[34361]: I0224 05:38:19.659551 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67bcb9df49-d2cv6" podStartSLOduration=2.65953144 podStartE2EDuration="2.65953144s" podCreationTimestamp="2026-02-24 05:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:38:19.657940698 +0000 UTC m=+59.360557754" 
watchObservedRunningTime="2026-02-24 05:38:19.65953144 +0000 UTC m=+59.362148486" Feb 24 05:38:20.303432 master-0 kubenswrapper[34361]: I0224 05:38:20.303298 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:20.303432 master-0 kubenswrapper[34361]: I0224 05:38:20.303368 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:38:20.307869 master-0 kubenswrapper[34361]: I0224 05:38:20.307802 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:38:20.307974 master-0 kubenswrapper[34361]: I0224 05:38:20.307901 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:38:28.300666 master-0 kubenswrapper[34361]: I0224 05:38:28.300572 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:28.301710 master-0 kubenswrapper[34361]: I0224 05:38:28.301621 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67bcb9df49-d2cv6" Feb 24 05:38:28.303804 master-0 kubenswrapper[34361]: I0224 05:38:28.303722 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:38:28.303889 master-0 kubenswrapper[34361]: I0224 05:38:28.303829 
34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:38:30.033827 master-0 kubenswrapper[34361]: I0224 05:38:30.031940 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:38:30.037250 master-0 kubenswrapper[34361]: I0224 05:38:30.037180 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 24 05:38:30.061241 master-0 kubenswrapper[34361]: I0224 05:38:30.061124 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 24 05:38:30.061615 master-0 kubenswrapper[34361]: I0224 05:38:30.061499 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="60e6a292-a766-471c-90c8-843f10a5820c" containerName="installer" containerID="cri-o://c551e23385455f60a1d1ce791e66a617e0f04c1d922be0d890276b70483491f6" gracePeriod=30 Feb 24 05:38:30.135706 master-0 kubenswrapper[34361]: I0224 05:38:30.135644 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") pod \"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\" (UID: 
\"afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a\") " Feb 24 05:38:30.138845 master-0 kubenswrapper[34361]: I0224 05:38:30.138803 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a" (UID: "afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:38:30.238011 master-0 kubenswrapper[34361]: I0224 05:38:30.237893 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afaa1cb5-3ad5-4c67-8802-9c4db23a2e3a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 05:38:30.303789 master-0 kubenswrapper[34361]: I0224 05:38:30.303576 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:38:30.303789 master-0 kubenswrapper[34361]: I0224 05:38:30.303642 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:38:33.265689 master-0 kubenswrapper[34361]: I0224 05:38:33.264204 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 24 05:38:33.277851 master-0 kubenswrapper[34361]: I0224 05:38:33.277789 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 24 05:38:33.277944 master-0 kubenswrapper[34361]: I0224 05:38:33.277933 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.393913 master-0 kubenswrapper[34361]: I0224 05:38:33.393845 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kube-api-access\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.394352 master-0 kubenswrapper[34361]: I0224 05:38:33.394019 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-var-lock\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.394352 master-0 kubenswrapper[34361]: I0224 05:38:33.394260 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.495854 master-0 kubenswrapper[34361]: I0224 05:38:33.495768 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-var-lock\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.495854 master-0 kubenswrapper[34361]: I0224 05:38:33.495884 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.496248 master-0 kubenswrapper[34361]: I0224 05:38:33.495907 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kube-api-access\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.496248 master-0 kubenswrapper[34361]: I0224 05:38:33.496048 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-var-lock\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.496248 master-0 kubenswrapper[34361]: I0224 05:38:33.496169 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.515867 master-0 kubenswrapper[34361]: I0224 05:38:33.515645 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kube-api-access\") pod \"installer-5-master-0\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:33.609489 master-0 kubenswrapper[34361]: I0224 05:38:33.609424 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Feb 24 05:38:35.698815 master-0 kubenswrapper[34361]: I0224 05:38:35.696167 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Feb 24 05:38:35.860732 master-0 kubenswrapper[34361]: I0224 05:38:35.859998 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-955b69498-crzjg" event={"ID":"691f1da3-ccd7-416a-9031-dea1b78f71ee","Type":"ContainerStarted","Data":"751ba64bf5160a9e2eb35ef27cb26886bc7054d1cc22692ef4ad586149b38c92"}
Feb 24 05:38:35.860732 master-0 kubenswrapper[34361]: I0224 05:38:35.860474 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-955b69498-crzjg"
Feb 24 05:38:35.862947 master-0 kubenswrapper[34361]: I0224 05:38:35.861809 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"175e9f88-7ed2-441b-8de2-71aa4d32c9c5","Type":"ContainerStarted","Data":"bda664e24fd1e5b2142b4b3e022f42d08642b149d9744bb2d58e88a858a06151"}
Feb 24 05:38:35.862947 master-0 kubenswrapper[34361]: I0224 05:38:35.862466 34361 patch_prober.go:28] interesting pod/downloads-955b69498-crzjg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" start-of-body=
Feb 24 05:38:35.862947 master-0 kubenswrapper[34361]: I0224 05:38:35.862519 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-crzjg" podUID="691f1da3-ccd7-416a-9031-dea1b78f71ee" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused"
Feb 24 05:38:35.882360 master-0 kubenswrapper[34361]: I0224 05:38:35.882181 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-955b69498-crzjg" podStartSLOduration=1.838371479 podStartE2EDuration="37.882150681s" podCreationTimestamp="2026-02-24 05:37:58 +0000 UTC" firstStartedPulling="2026-02-24 05:37:59.402694908 +0000 UTC m=+39.105311954" lastFinishedPulling="2026-02-24 05:38:35.44647411 +0000 UTC m=+75.149091156" observedRunningTime="2026-02-24 05:38:35.881283417 +0000 UTC m=+75.583900483" watchObservedRunningTime="2026-02-24 05:38:35.882150681 +0000 UTC m=+75.584767737"
Feb 24 05:38:36.219545 master-0 kubenswrapper[34361]: I0224 05:38:36.219443 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" podUID="7ab989ea-1de3-497d-9834-889d587a0270" containerName="oauth-openshift" containerID="cri-o://5975ab0155e8aeb506e71a83f7c1f9a9ec653513b28609bf539ddc6275cf7ab1" gracePeriod=15
Feb 24 05:38:36.874256 master-0 kubenswrapper[34361]: I0224 05:38:36.874171 34361 generic.go:334] "Generic (PLEG): container finished" podID="7ab989ea-1de3-497d-9834-889d587a0270" containerID="5975ab0155e8aeb506e71a83f7c1f9a9ec653513b28609bf539ddc6275cf7ab1" exitCode=0
Feb 24 05:38:36.875351 master-0 kubenswrapper[34361]: I0224 05:38:36.874295 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" event={"ID":"7ab989ea-1de3-497d-9834-889d587a0270","Type":"ContainerDied","Data":"5975ab0155e8aeb506e71a83f7c1f9a9ec653513b28609bf539ddc6275cf7ab1"}
Feb 24 05:38:36.879744 master-0 kubenswrapper[34361]: I0224 05:38:36.879650 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"175e9f88-7ed2-441b-8de2-71aa4d32c9c5","Type":"ContainerStarted","Data":"4e7136483f5e3e11e16df3e30312d58bc2b27d4e71da0b6be27e273e26105293"}
Feb 24 05:38:36.880375 master-0 kubenswrapper[34361]: I0224 05:38:36.880282 34361 patch_prober.go:28] interesting pod/downloads-955b69498-crzjg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" start-of-body=
Feb 24 05:38:36.880375 master-0 kubenswrapper[34361]: I0224 05:38:36.880369 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-crzjg" podUID="691f1da3-ccd7-416a-9031-dea1b78f71ee" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused"
Feb 24 05:38:36.976589 master-0 kubenswrapper[34361]: I0224 05:38:36.976529 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"
Feb 24 05:38:37.003880 master-0 kubenswrapper[34361]: I0224 05:38:37.003766 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=4.003739399 podStartE2EDuration="4.003739399s" podCreationTimestamp="2026-02-24 05:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:38:36.906387218 +0000 UTC m=+76.609004274" watchObservedRunningTime="2026-02-24 05:38:37.003739399 +0000 UTC m=+76.706356465"
Feb 24 05:38:37.017863 master-0 kubenswrapper[34361]: I0224 05:38:37.017777 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-64b7796859-6g644"]
Feb 24 05:38:37.018191 master-0 kubenswrapper[34361]: E0224 05:38:37.018158 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab989ea-1de3-497d-9834-889d587a0270" containerName="oauth-openshift"
Feb 24 05:38:37.018191 master-0 kubenswrapper[34361]: I0224 05:38:37.018185 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab989ea-1de3-497d-9834-889d587a0270" containerName="oauth-openshift"
Feb 24 05:38:37.018447 master-0 kubenswrapper[34361]: I0224 05:38:37.018418 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab989ea-1de3-497d-9834-889d587a0270" containerName="oauth-openshift"
Feb 24 05:38:37.019016 master-0 kubenswrapper[34361]: I0224 05:38:37.018984 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.054696 master-0 kubenswrapper[34361]: I0224 05:38:37.054617 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64b7796859-6g644"]
Feb 24 05:38:37.063670 master-0 kubenswrapper[34361]: I0224 05:38:37.063600 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-provider-selection\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.063701 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ab989ea-1de3-497d-9834-889d587a0270-audit-dir\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.063748 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.063772 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-serving-cert\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.063798 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp7lq\" (UniqueName: \"kubernetes.io/projected/7ab989ea-1de3-497d-9834-889d587a0270-kube-api-access-mp7lq\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.063826 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-audit-policies\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.063846 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-router-certs\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.063885 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-service-ca\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.063919 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-login\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.064019 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.064047 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-trusted-ca-bundle\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064272 master-0 kubenswrapper[34361]: I0224 05:38:37.064131 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-ocp-branding-template\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.064682 master-0 kubenswrapper[34361]: I0224 05:38:37.064491 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-error\") pod \"7ab989ea-1de3-497d-9834-889d587a0270\" (UID: \"7ab989ea-1de3-497d-9834-889d587a0270\") "
Feb 24 05:38:37.065573 master-0 kubenswrapper[34361]: I0224 05:38:37.065511 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:37.066160 master-0 kubenswrapper[34361]: I0224 05:38:37.066128 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:37.066480 master-0 kubenswrapper[34361]: I0224 05:38:37.066411 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:37.067469 master-0 kubenswrapper[34361]: I0224 05:38:37.067411 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ab989ea-1de3-497d-9834-889d587a0270-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:38:37.067561 master-0 kubenswrapper[34361]: I0224 05:38:37.067137 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:38:37.068237 master-0 kubenswrapper[34361]: I0224 05:38:37.068036 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:37.068808 master-0 kubenswrapper[34361]: I0224 05:38:37.068498 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:37.076102 master-0 kubenswrapper[34361]: I0224 05:38:37.070726 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:37.076102 master-0 kubenswrapper[34361]: I0224 05:38:37.071551 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:37.076102 master-0 kubenswrapper[34361]: I0224 05:38:37.075450 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:37.076102 master-0 kubenswrapper[34361]: I0224 05:38:37.075493 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab989ea-1de3-497d-9834-889d587a0270-kube-api-access-mp7lq" (OuterVolumeSpecName: "kube-api-access-mp7lq") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "kube-api-access-mp7lq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:38:37.076102 master-0 kubenswrapper[34361]: I0224 05:38:37.075469 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:37.076102 master-0 kubenswrapper[34361]: I0224 05:38:37.075624 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "7ab989ea-1de3-497d-9834-889d587a0270" (UID: "7ab989ea-1de3-497d-9834-889d587a0270"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.166781 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-session\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.166856 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-audit-policies\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.166896 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-login\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.166971 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.167052 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.167170 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-error\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.167239 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2hl\" (UniqueName: \"kubernetes.io/projected/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-kube-api-access-zr2hl\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.167288 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.167444 master-0 kubenswrapper[34361]: I0224 05:38:37.167386 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167609 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167691 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-audit-dir\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167741 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167806 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167918 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167942 34361 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7ab989ea-1de3-497d-9834-889d587a0270-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167957 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167972 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.167998 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp7lq\" (UniqueName: \"kubernetes.io/projected/7ab989ea-1de3-497d-9834-889d587a0270-kube-api-access-mp7lq\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.168012 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.168026 34361 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-audit-policies\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.168041 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.168053 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168064 master-0 kubenswrapper[34361]: I0224 05:38:37.168066 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168677 master-0 kubenswrapper[34361]: I0224 05:38:37.168087 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168677 master-0 kubenswrapper[34361]: I0224 05:38:37.168104 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.168677 master-0 kubenswrapper[34361]: I0224 05:38:37.168120 34361 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/7ab989ea-1de3-497d-9834-889d587a0270-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Feb 24 05:38:37.270092 master-0 kubenswrapper[34361]: I0224 05:38:37.270010 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.270388 master-0 kubenswrapper[34361]: I0224 05:38:37.270266 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.271170 master-0 kubenswrapper[34361]: I0224 05:38:37.270297 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-audit-dir\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.271215 master-0 kubenswrapper[34361]: I0224 05:38:37.271147 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-audit-dir\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.271215 master-0 kubenswrapper[34361]: I0224 05:38:37.271192 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.271362 master-0 kubenswrapper[34361]: I0224 05:38:37.271340 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.271457 master-0 kubenswrapper[34361]: I0224 05:38:37.271421 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-session\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.272573 master-0 kubenswrapper[34361]: I0224 05:38:37.272502 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-audit-policies\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.272658 master-0 kubenswrapper[34361]: I0224 05:38:37.272620 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-login\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.272752 master-0 kubenswrapper[34361]: I0224 05:38:37.272717 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.272891 master-0 kubenswrapper[34361]: I0224 05:38:37.272763 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.272891 master-0 kubenswrapper[34361]: I0224 05:38:37.272762 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.273456 master-0 kubenswrapper[34361]: I0224 05:38:37.272935 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-error\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.273456 master-0 kubenswrapper[34361]: I0224 05:38:37.273062 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2hl\" (UniqueName: \"kubernetes.io/projected/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-kube-api-access-zr2hl\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.273456 master-0 kubenswrapper[34361]: I0224 05:38:37.273117 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.273456 master-0 kubenswrapper[34361]: I0224 05:38:37.273147 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-audit-policies\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.273752 master-0 kubenswrapper[34361]: I0224 05:38:37.273711 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.273993 master-0 kubenswrapper[34361]: I0224 05:38:37.273966 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644"
Feb 24 05:38:37.274733 master-0 kubenswrapper[34361]: I0224 05:38:37.274696 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-session\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") "
pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.275063 master-0 kubenswrapper[34361]: I0224 05:38:37.275019 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.275176 master-0 kubenswrapper[34361]: I0224 05:38:37.275153 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.277200 master-0 kubenswrapper[34361]: I0224 05:38:37.277065 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.277819 master-0 kubenswrapper[34361]: I0224 05:38:37.277493 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-error\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.277819 master-0 kubenswrapper[34361]: I0224 05:38:37.277536 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.278037 master-0 kubenswrapper[34361]: I0224 05:38:37.277986 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-v4-0-config-user-template-login\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.301100 master-0 kubenswrapper[34361]: I0224 05:38:37.301051 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2hl\" (UniqueName: \"kubernetes.io/projected/5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab-kube-api-access-zr2hl\") pod \"oauth-openshift-64b7796859-6g644\" (UID: \"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab\") " pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.349051 master-0 kubenswrapper[34361]: I0224 05:38:37.348435 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:37.861780 master-0 kubenswrapper[34361]: I0224 05:38:37.861664 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64b7796859-6g644"] Feb 24 05:38:37.873089 master-0 kubenswrapper[34361]: W0224 05:38:37.873019 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5faf6ee1_ba37_4cc9_8ab7_4e63409d90ab.slice/crio-0974584ca3f6e728873b60489619485abaf6cf59d44b69698dad5d7b1cc85115 WatchSource:0}: Error finding container 0974584ca3f6e728873b60489619485abaf6cf59d44b69698dad5d7b1cc85115: Status 404 returned error can't find the container with id 0974584ca3f6e728873b60489619485abaf6cf59d44b69698dad5d7b1cc85115 Feb 24 05:38:37.887225 master-0 kubenswrapper[34361]: I0224 05:38:37.887174 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64b7796859-6g644" event={"ID":"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab","Type":"ContainerStarted","Data":"0974584ca3f6e728873b60489619485abaf6cf59d44b69698dad5d7b1cc85115"} Feb 24 05:38:37.889488 master-0 kubenswrapper[34361]: I0224 05:38:37.889386 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" event={"ID":"7ab989ea-1de3-497d-9834-889d587a0270","Type":"ContainerDied","Data":"592b9e064fed96c4747d994821e4b391018fb4ebc4b4fb73f26e97067b1a4a6c"} Feb 24 05:38:37.889632 master-0 kubenswrapper[34361]: I0224 05:38:37.889550 34361 scope.go:117] "RemoveContainer" containerID="5975ab0155e8aeb506e71a83f7c1f9a9ec653513b28609bf539ddc6275cf7ab1" Feb 24 05:38:37.889767 master-0 kubenswrapper[34361]: I0224 05:38:37.889556 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l" Feb 24 05:38:37.986496 master-0 kubenswrapper[34361]: I0224 05:38:37.986412 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"] Feb 24 05:38:37.988969 master-0 kubenswrapper[34361]: I0224 05:38:37.988942 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6d4d899fc6-cgn6l"] Feb 24 05:38:38.301483 master-0 kubenswrapper[34361]: I0224 05:38:38.301300 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:38:38.301775 master-0 kubenswrapper[34361]: I0224 05:38:38.301547 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:38:38.611962 master-0 kubenswrapper[34361]: I0224 05:38:38.611880 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab989ea-1de3-497d-9834-889d587a0270" path="/var/lib/kubelet/pods/7ab989ea-1de3-497d-9834-889d587a0270/volumes" Feb 24 05:38:38.895381 master-0 kubenswrapper[34361]: I0224 05:38:38.895161 34361 patch_prober.go:28] interesting pod/downloads-955b69498-crzjg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" start-of-body= Feb 24 05:38:38.896795 master-0 kubenswrapper[34361]: I0224 05:38:38.896722 34361 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-955b69498-crzjg" 
podUID="691f1da3-ccd7-416a-9031-dea1b78f71ee" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" Feb 24 05:38:38.897090 master-0 kubenswrapper[34361]: I0224 05:38:38.895295 34361 patch_prober.go:28] interesting pod/downloads-955b69498-crzjg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" start-of-body= Feb 24 05:38:38.897241 master-0 kubenswrapper[34361]: I0224 05:38:38.897133 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-955b69498-crzjg" podUID="691f1da3-ccd7-416a-9031-dea1b78f71ee" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.86:8080/\": dial tcp 10.128.0.86:8080: connect: connection refused" Feb 24 05:38:38.906800 master-0 kubenswrapper[34361]: I0224 05:38:38.906700 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64b7796859-6g644" event={"ID":"5faf6ee1-ba37-4cc9-8ab7-4e63409d90ab","Type":"ContainerStarted","Data":"f11526df228aae177cff96123462b6e5f895ad9fd0ec164df34b454ee52c6ef8"} Feb 24 05:38:38.907566 master-0 kubenswrapper[34361]: I0224 05:38:38.907519 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:38.916284 master-0 kubenswrapper[34361]: I0224 05:38:38.915598 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64b7796859-6g644" Feb 24 05:38:39.203795 master-0 kubenswrapper[34361]: I0224 05:38:39.203543 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64b7796859-6g644" podStartSLOduration=12.203505368 podStartE2EDuration="12.203505368s" 
podCreationTimestamp="2026-02-24 05:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:38:39.191950047 +0000 UTC m=+78.894567123" watchObservedRunningTime="2026-02-24 05:38:39.203505368 +0000 UTC m=+78.906122454" Feb 24 05:38:40.304450 master-0 kubenswrapper[34361]: I0224 05:38:40.304290 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:38:40.304450 master-0 kubenswrapper[34361]: I0224 05:38:40.304441 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:38:43.970092 master-0 kubenswrapper[34361]: I0224 05:38:43.969909 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_60e6a292-a766-471c-90c8-843f10a5820c/installer/0.log" Feb 24 05:38:43.970092 master-0 kubenswrapper[34361]: I0224 05:38:43.969999 34361 generic.go:334] "Generic (PLEG): container finished" podID="60e6a292-a766-471c-90c8-843f10a5820c" containerID="c551e23385455f60a1d1ce791e66a617e0f04c1d922be0d890276b70483491f6" exitCode=1 Feb 24 05:38:43.970092 master-0 kubenswrapper[34361]: I0224 05:38:43.970044 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"60e6a292-a766-471c-90c8-843f10a5820c","Type":"ContainerDied","Data":"c551e23385455f60a1d1ce791e66a617e0f04c1d922be0d890276b70483491f6"} Feb 24 05:38:44.094892 master-0 kubenswrapper[34361]: I0224 05:38:44.094706 34361 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_60e6a292-a766-471c-90c8-843f10a5820c/installer/0.log" Feb 24 05:38:44.094892 master-0 kubenswrapper[34361]: I0224 05:38:44.094826 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 05:38:44.216599 master-0 kubenswrapper[34361]: I0224 05:38:44.216530 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-var-lock\") pod \"60e6a292-a766-471c-90c8-843f10a5820c\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " Feb 24 05:38:44.216869 master-0 kubenswrapper[34361]: I0224 05:38:44.216624 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60e6a292-a766-471c-90c8-843f10a5820c-kube-api-access\") pod \"60e6a292-a766-471c-90c8-843f10a5820c\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " Feb 24 05:38:44.216869 master-0 kubenswrapper[34361]: I0224 05:38:44.216679 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-var-lock" (OuterVolumeSpecName: "var-lock") pod "60e6a292-a766-471c-90c8-843f10a5820c" (UID: "60e6a292-a766-471c-90c8-843f10a5820c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:38:44.216869 master-0 kubenswrapper[34361]: I0224 05:38:44.216738 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-kubelet-dir\") pod \"60e6a292-a766-471c-90c8-843f10a5820c\" (UID: \"60e6a292-a766-471c-90c8-843f10a5820c\") " Feb 24 05:38:44.217004 master-0 kubenswrapper[34361]: I0224 05:38:44.216885 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "60e6a292-a766-471c-90c8-843f10a5820c" (UID: "60e6a292-a766-471c-90c8-843f10a5820c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:38:44.217203 master-0 kubenswrapper[34361]: I0224 05:38:44.217175 34361 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:38:44.217203 master-0 kubenswrapper[34361]: I0224 05:38:44.217199 34361 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/60e6a292-a766-471c-90c8-843f10a5820c-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:38:44.220341 master-0 kubenswrapper[34361]: I0224 05:38:44.220197 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e6a292-a766-471c-90c8-843f10a5820c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "60e6a292-a766-471c-90c8-843f10a5820c" (UID: "60e6a292-a766-471c-90c8-843f10a5820c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:38:44.319467 master-0 kubenswrapper[34361]: I0224 05:38:44.319392 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60e6a292-a766-471c-90c8-843f10a5820c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 05:38:44.983926 master-0 kubenswrapper[34361]: I0224 05:38:44.983865 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_60e6a292-a766-471c-90c8-843f10a5820c/installer/0.log" Feb 24 05:38:44.984724 master-0 kubenswrapper[34361]: I0224 05:38:44.983943 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"60e6a292-a766-471c-90c8-843f10a5820c","Type":"ContainerDied","Data":"faab8261c93a00388c02a6daf4ef05b9746c5de9a14514982ae6912236cceade"} Feb 24 05:38:44.984724 master-0 kubenswrapper[34361]: I0224 05:38:44.984001 34361 scope.go:117] "RemoveContainer" containerID="c551e23385455f60a1d1ce791e66a617e0f04c1d922be0d890276b70483491f6" Feb 24 05:38:44.984724 master-0 kubenswrapper[34361]: I0224 05:38:44.984143 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 24 05:38:45.011177 master-0 kubenswrapper[34361]: I0224 05:38:45.011089 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 24 05:38:45.016696 master-0 kubenswrapper[34361]: I0224 05:38:45.016518 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 24 05:38:46.641709 master-0 kubenswrapper[34361]: I0224 05:38:46.641586 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e6a292-a766-471c-90c8-843f10a5820c" path="/var/lib/kubelet/pods/60e6a292-a766-471c-90c8-843f10a5820c/volumes" Feb 24 05:38:48.301374 master-0 kubenswrapper[34361]: I0224 05:38:48.301247 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:38:48.302573 master-0 kubenswrapper[34361]: I0224 05:38:48.301406 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:38:48.914010 master-0 kubenswrapper[34361]: I0224 05:38:48.913462 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-955b69498-crzjg" Feb 24 05:38:50.303792 master-0 kubenswrapper[34361]: I0224 05:38:50.303671 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:38:50.304517 master-0 
kubenswrapper[34361]: I0224 05:38:50.303832 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:38:58.301992 master-0 kubenswrapper[34361]: I0224 05:38:58.301887 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:38:58.303185 master-0 kubenswrapper[34361]: I0224 05:38:58.302017 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:39:00.303992 master-0 kubenswrapper[34361]: I0224 05:39:00.303857 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:39:00.303992 master-0 kubenswrapper[34361]: I0224 05:39:00.303962 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:39:08.301256 master-0 kubenswrapper[34361]: I0224 05:39:08.301155 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:39:08.301256 master-0 kubenswrapper[34361]: I0224 05:39:08.301255 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:39:10.304023 master-0 kubenswrapper[34361]: I0224 05:39:10.303936 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:39:10.305204 master-0 kubenswrapper[34361]: I0224 05:39:10.304052 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:39:18.301342 master-0 kubenswrapper[34361]: I0224 05:39:18.301245 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:39:18.302650 master-0 kubenswrapper[34361]: I0224 05:39:18.301370 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:39:20.304703 master-0 kubenswrapper[34361]: I0224 
05:39:20.304590 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:39:20.305728 master-0 kubenswrapper[34361]: I0224 05:39:20.304729 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:39:24.389810 master-0 kubenswrapper[34361]: I0224 05:39:24.389729 34361 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: E0224 05:39:24.390247 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60e6a292-a766-471c-90c8-843f10a5820c" containerName="installer" Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.390272 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="60e6a292-a766-471c-90c8-843f10a5820c" containerName="installer" Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.390618 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="60e6a292-a766-471c-90c8-843f10a5820c" containerName="installer" Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.391409 34361 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.391604 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.391831 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" containerID="cri-o://ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3" gracePeriod=15 Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.391950 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0" gracePeriod=15 Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.391953 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328" gracePeriod=15 Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.391985 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e" gracePeriod=15 Feb 24 05:39:24.393503 master-0 kubenswrapper[34361]: I0224 05:39:24.392129 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9" gracePeriod=15 Feb 24 05:39:24.396865 master-0 kubenswrapper[34361]: I0224 05:39:24.396452 34361 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 24 05:39:24.396865 master-0 kubenswrapper[34361]: E0224 05:39:24.396800 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 24 05:39:24.396865 master-0 kubenswrapper[34361]: I0224 05:39:24.396823 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 24 05:39:24.396865 master-0 kubenswrapper[34361]: E0224 05:39:24.396857 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: I0224 05:39:24.396872 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: E0224 05:39:24.396898 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: I0224 05:39:24.396913 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: E0224 05:39:24.396935 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: I0224 05:39:24.396950 34361 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: E0224 05:39:24.396981 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: I0224 05:39:24.396995 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="setup" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: E0224 05:39:24.397037 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 24 05:39:24.397050 master-0 kubenswrapper[34361]: I0224 05:39:24.397051 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 24 05:39:24.397457 master-0 kubenswrapper[34361]: I0224 05:39:24.397304 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 05:39:24.397457 master-0 kubenswrapper[34361]: I0224 05:39:24.397379 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver" Feb 24 05:39:24.397457 master-0 kubenswrapper[34361]: I0224 05:39:24.397401 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-cert-syncer" Feb 24 05:39:24.397457 master-0 kubenswrapper[34361]: I0224 05:39:24.397440 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-insecure-readyz" Feb 24 05:39:24.397457 master-0 kubenswrapper[34361]: I0224 05:39:24.397467 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" 
containerName="setup" Feb 24 05:39:24.397767 master-0 kubenswrapper[34361]: I0224 05:39:24.397498 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb342c942d3d92fd08ed7cf68fafb94c" containerName="kube-apiserver-check-endpoints" Feb 24 05:39:24.543631 master-0 kubenswrapper[34361]: I0224 05:39:24.543536 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.543892 master-0 kubenswrapper[34361]: I0224 05:39:24.543705 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.543892 master-0 kubenswrapper[34361]: I0224 05:39:24.543755 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.543892 master-0 kubenswrapper[34361]: I0224 05:39:24.543783 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
Feb 24 05:39:24.543892 master-0 kubenswrapper[34361]: I0224 05:39:24.543829 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.543892 master-0 kubenswrapper[34361]: I0224 05:39:24.543856 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.544182 master-0 kubenswrapper[34361]: I0224 05:39:24.544128 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.544233 master-0 kubenswrapper[34361]: I0224 05:39:24.544189 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.645911 master-0 kubenswrapper[34361]: I0224 05:39:24.645738 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646128 master-0 kubenswrapper[34361]: I0224 05:39:24.645949 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646199 master-0 kubenswrapper[34361]: I0224 05:39:24.646136 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.646235 master-0 kubenswrapper[34361]: I0224 05:39:24.646221 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.646267 master-0 kubenswrapper[34361]: I0224 05:39:24.646249 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646301 master-0 kubenswrapper[34361]: I0224 05:39:24.646274 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.646359 master-0 kubenswrapper[34361]: I0224 05:39:24.646303 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.646359 master-0 kubenswrapper[34361]: I0224 05:39:24.646336 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646422 master-0 kubenswrapper[34361]: I0224 05:39:24.646367 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.646422 master-0 kubenswrapper[34361]: I0224 05:39:24.646412 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646480 master-0 kubenswrapper[34361]: I0224 05:39:24.646423 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/487622064474ed0ec70f7bf2a0fcb80b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"487622064474ed0ec70f7bf2a0fcb80b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:24.646480 master-0 kubenswrapper[34361]: I0224 05:39:24.646449 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646480 master-0 kubenswrapper[34361]: I0224 05:39:24.646432 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646582 master-0 kubenswrapper[34361]: I0224 05:39:24.646511 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646582 master-0 kubenswrapper[34361]: I0224 05:39:24.646560 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:24.646711 master-0 kubenswrapper[34361]: I0224 05:39:24.646652 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:25.441916 master-0 kubenswrapper[34361]: I0224 05:39:25.441829 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 24 05:39:25.443572 master-0 kubenswrapper[34361]: I0224 05:39:25.443511 34361 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328" exitCode=0 Feb 24 05:39:25.443572 master-0 kubenswrapper[34361]: I0224 05:39:25.443564 34361 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0" exitCode=0 Feb 24 05:39:25.443700 master-0 kubenswrapper[34361]: I0224 05:39:25.443585 34361 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e" exitCode=0 Feb 24 05:39:25.443700 master-0 kubenswrapper[34361]: I0224 05:39:25.443612 34361 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9" exitCode=2 Feb 24 05:39:25.447608 master-0 kubenswrapper[34361]: I0224 05:39:25.447560 34361 generic.go:334] "Generic (PLEG): container finished" podID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" containerID="4e7136483f5e3e11e16df3e30312d58bc2b27d4e71da0b6be27e273e26105293" exitCode=0 Feb 24 05:39:25.447747 master-0 kubenswrapper[34361]: I0224 
05:39:25.447675 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"175e9f88-7ed2-441b-8de2-71aa4d32c9c5","Type":"ContainerDied","Data":"4e7136483f5e3e11e16df3e30312d58bc2b27d4e71da0b6be27e273e26105293"} Feb 24 05:39:25.449682 master-0 kubenswrapper[34361]: I0224 05:39:25.449603 34361 status_manager.go:851] "Failed to get status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:26.881587 master-0 kubenswrapper[34361]: I0224 05:39:26.881546 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 24 05:39:26.882867 master-0 kubenswrapper[34361]: I0224 05:39:26.882842 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:26.884536 master-0 kubenswrapper[34361]: I0224 05:39:26.884457 34361 status_manager.go:851] "Failed to get status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:26.885958 master-0 kubenswrapper[34361]: I0224 05:39:26.885901 34361 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:26.999986 master-0 kubenswrapper[34361]: I0224 05:39:26.999904 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 24 05:39:26.999986 master-0 kubenswrapper[34361]: I0224 05:39:26.999993 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 24 05:39:27.000129 master-0 kubenswrapper[34361]: I0224 05:39:27.000011 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") pod \"eb342c942d3d92fd08ed7cf68fafb94c\" (UID: \"eb342c942d3d92fd08ed7cf68fafb94c\") " Feb 24 05:39:27.000129 master-0 
kubenswrapper[34361]: I0224 05:39:27.000061 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:39:27.000219 master-0 kubenswrapper[34361]: I0224 05:39:27.000187 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:39:27.000390 master-0 kubenswrapper[34361]: I0224 05:39:27.000347 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "eb342c942d3d92fd08ed7cf68fafb94c" (UID: "eb342c942d3d92fd08ed7cf68fafb94c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:39:27.000487 master-0 kubenswrapper[34361]: I0224 05:39:27.000393 34361 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:39:27.000581 master-0 kubenswrapper[34361]: I0224 05:39:27.000567 34361 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:39:27.033893 master-0 kubenswrapper[34361]: I0224 05:39:27.033851 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 24 05:39:27.035682 master-0 kubenswrapper[34361]: I0224 05:39:27.035593 34361 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.036559 master-0 kubenswrapper[34361]: I0224 05:39:27.036507 34361 status_manager.go:851] "Failed to get status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.103536 master-0 kubenswrapper[34361]: I0224 05:39:27.103441 34361 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/eb342c942d3d92fd08ed7cf68fafb94c-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:39:27.205106 master-0 kubenswrapper[34361]: I0224 05:39:27.205014 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kubelet-dir\") pod \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " Feb 24 05:39:27.205376 master-0 kubenswrapper[34361]: I0224 05:39:27.205182 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-var-lock\") pod \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " Feb 24 05:39:27.205376 master-0 kubenswrapper[34361]: I0224 05:39:27.205249 34361 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kube-api-access\") pod \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\" (UID: \"175e9f88-7ed2-441b-8de2-71aa4d32c9c5\") " Feb 24 05:39:27.205452 master-0 kubenswrapper[34361]: I0224 05:39:27.205381 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-var-lock" (OuterVolumeSpecName: "var-lock") pod "175e9f88-7ed2-441b-8de2-71aa4d32c9c5" (UID: "175e9f88-7ed2-441b-8de2-71aa4d32c9c5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:39:27.206199 master-0 kubenswrapper[34361]: I0224 05:39:27.206155 34361 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:39:27.206301 master-0 kubenswrapper[34361]: I0224 05:39:27.206240 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "175e9f88-7ed2-441b-8de2-71aa4d32c9c5" (UID: "175e9f88-7ed2-441b-8de2-71aa4d32c9c5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:39:27.210895 master-0 kubenswrapper[34361]: I0224 05:39:27.210830 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "175e9f88-7ed2-441b-8de2-71aa4d32c9c5" (UID: "175e9f88-7ed2-441b-8de2-71aa4d32c9c5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:39:27.308078 master-0 kubenswrapper[34361]: I0224 05:39:27.307867 34361 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:39:27.308078 master-0 kubenswrapper[34361]: I0224 05:39:27.307958 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/175e9f88-7ed2-441b-8de2-71aa4d32c9c5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 24 05:39:27.471063 master-0 kubenswrapper[34361]: I0224 05:39:27.470971 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_eb342c942d3d92fd08ed7cf68fafb94c/kube-apiserver-cert-syncer/0.log" Feb 24 05:39:27.473210 master-0 kubenswrapper[34361]: I0224 05:39:27.472754 34361 generic.go:334] "Generic (PLEG): container finished" podID="eb342c942d3d92fd08ed7cf68fafb94c" containerID="ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3" exitCode=0 Feb 24 05:39:27.473210 master-0 kubenswrapper[34361]: I0224 05:39:27.472952 34361 scope.go:117] "RemoveContainer" containerID="8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328" Feb 24 05:39:27.473210 master-0 kubenswrapper[34361]: I0224 05:39:27.472948 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:27.476869 master-0 kubenswrapper[34361]: I0224 05:39:27.476582 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"175e9f88-7ed2-441b-8de2-71aa4d32c9c5","Type":"ContainerDied","Data":"bda664e24fd1e5b2142b4b3e022f42d08642b149d9744bb2d58e88a858a06151"} Feb 24 05:39:27.476869 master-0 kubenswrapper[34361]: I0224 05:39:27.476630 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda664e24fd1e5b2142b4b3e022f42d08642b149d9744bb2d58e88a858a06151" Feb 24 05:39:27.476869 master-0 kubenswrapper[34361]: I0224 05:39:27.476695 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 24 05:39:27.509584 master-0 kubenswrapper[34361]: I0224 05:39:27.509356 34361 scope.go:117] "RemoveContainer" containerID="dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0" Feb 24 05:39:27.523141 master-0 kubenswrapper[34361]: I0224 05:39:27.523063 34361 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.524477 master-0 kubenswrapper[34361]: I0224 05:39:27.524113 34361 status_manager.go:851] "Failed to get status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.526114 master-0 kubenswrapper[34361]: I0224 05:39:27.525840 34361 status_manager.go:851] "Failed to get 
status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.526499 master-0 kubenswrapper[34361]: I0224 05:39:27.526456 34361 status_manager.go:851] "Failed to get status for pod" podUID="eb342c942d3d92fd08ed7cf68fafb94c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.547113 master-0 kubenswrapper[34361]: I0224 05:39:27.547058 34361 scope.go:117] "RemoveContainer" containerID="b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e" Feb 24 05:39:27.571409 master-0 kubenswrapper[34361]: I0224 05:39:27.571354 34361 scope.go:117] "RemoveContainer" containerID="0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9" Feb 24 05:39:27.598641 master-0 kubenswrapper[34361]: I0224 05:39:27.598575 34361 scope.go:117] "RemoveContainer" containerID="ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3" Feb 24 05:39:27.625161 master-0 kubenswrapper[34361]: I0224 05:39:27.625102 34361 scope.go:117] "RemoveContainer" containerID="adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8" Feb 24 05:39:27.656073 master-0 kubenswrapper[34361]: I0224 05:39:27.656003 34361 scope.go:117] "RemoveContainer" containerID="8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328" Feb 24 05:39:27.656946 master-0 kubenswrapper[34361]: E0224 05:39:27.656858 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328\": container with ID starting with 
8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328 not found: ID does not exist" containerID="8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328" Feb 24 05:39:27.657041 master-0 kubenswrapper[34361]: I0224 05:39:27.656941 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328"} err="failed to get container status \"8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328\": rpc error: code = NotFound desc = could not find container \"8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328\": container with ID starting with 8f8f553fcbcac28ff76f32289d1c9a6da6236c150e8a334600ddcd18ae702328 not found: ID does not exist" Feb 24 05:39:27.657041 master-0 kubenswrapper[34361]: I0224 05:39:27.656989 34361 scope.go:117] "RemoveContainer" containerID="dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0" Feb 24 05:39:27.657785 master-0 kubenswrapper[34361]: E0224 05:39:27.657729 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0\": container with ID starting with dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0 not found: ID does not exist" containerID="dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0" Feb 24 05:39:27.657907 master-0 kubenswrapper[34361]: I0224 05:39:27.657802 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0"} err="failed to get container status \"dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0\": rpc error: code = NotFound desc = could not find container \"dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0\": container with ID starting with 
dc7f334ba093edb1de3b24fa9de1ce5cc2e49dfb4a9a68f400dd46d39244c4c0 not found: ID does not exist" Feb 24 05:39:27.657907 master-0 kubenswrapper[34361]: I0224 05:39:27.657848 34361 scope.go:117] "RemoveContainer" containerID="b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e" Feb 24 05:39:27.658760 master-0 kubenswrapper[34361]: E0224 05:39:27.658705 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e\": container with ID starting with b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e not found: ID does not exist" containerID="b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e" Feb 24 05:39:27.658826 master-0 kubenswrapper[34361]: I0224 05:39:27.658740 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e"} err="failed to get container status \"b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e\": rpc error: code = NotFound desc = could not find container \"b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e\": container with ID starting with b58a590ac15050b699587076d87598dc3616423922f5c5d6a6a0ff788a44d80e not found: ID does not exist" Feb 24 05:39:27.658826 master-0 kubenswrapper[34361]: I0224 05:39:27.658807 34361 scope.go:117] "RemoveContainer" containerID="0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9" Feb 24 05:39:27.659866 master-0 kubenswrapper[34361]: E0224 05:39:27.659799 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9\": container with ID starting with 0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9 not found: ID does not exist" 
containerID="0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9" Feb 24 05:39:27.659954 master-0 kubenswrapper[34361]: I0224 05:39:27.659869 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9"} err="failed to get container status \"0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9\": rpc error: code = NotFound desc = could not find container \"0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9\": container with ID starting with 0d935090be6ba5c3e4e9afb96a81c728e02b21ff904a964bb0876a43723149d9 not found: ID does not exist" Feb 24 05:39:27.659954 master-0 kubenswrapper[34361]: I0224 05:39:27.659919 34361 scope.go:117] "RemoveContainer" containerID="ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3" Feb 24 05:39:27.660877 master-0 kubenswrapper[34361]: E0224 05:39:27.660831 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3\": container with ID starting with ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3 not found: ID does not exist" containerID="ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3" Feb 24 05:39:27.661010 master-0 kubenswrapper[34361]: I0224 05:39:27.660975 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3"} err="failed to get container status \"ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3\": rpc error: code = NotFound desc = could not find container \"ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3\": container with ID starting with ced751750522f9c03e2e52e91588c4be42691ecfec782aec5dc06ccc927894a3 not found: ID does not exist" Feb 24 05:39:27.661098 master-0 
kubenswrapper[34361]: I0224 05:39:27.661084 34361 scope.go:117] "RemoveContainer" containerID="adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8" Feb 24 05:39:27.662020 master-0 kubenswrapper[34361]: E0224 05:39:27.661717 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8\": container with ID starting with adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8 not found: ID does not exist" containerID="adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8" Feb 24 05:39:27.662020 master-0 kubenswrapper[34361]: I0224 05:39:27.661784 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8"} err="failed to get container status \"adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8\": rpc error: code = NotFound desc = could not find container \"adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8\": container with ID starting with adcff6c4727c14e380617b95acac9bc44a9d20883cf6a8223ff49c60fefabbf8 not found: ID does not exist" Feb 24 05:39:27.884446 master-0 kubenswrapper[34361]: E0224 05:39:27.884304 34361 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.885446 master-0 kubenswrapper[34361]: E0224 05:39:27.885379 34361 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.886200 master-0 kubenswrapper[34361]: E0224 05:39:27.886130 34361 controller.go:195] "Failed to 
update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.887022 master-0 kubenswrapper[34361]: E0224 05:39:27.886965 34361 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.887830 master-0 kubenswrapper[34361]: E0224 05:39:27.887770 34361 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:27.888712 master-0 kubenswrapper[34361]: I0224 05:39:27.888615 34361 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 05:39:27.889950 master-0 kubenswrapper[34361]: E0224 05:39:27.889843 34361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 24 05:39:28.091404 master-0 kubenswrapper[34361]: E0224 05:39:28.091292 34361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 24 05:39:28.301141 master-0 kubenswrapper[34361]: I0224 05:39:28.301048 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:39:28.301404 master-0 kubenswrapper[34361]: I0224 05:39:28.301170 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:39:28.493298 master-0 kubenswrapper[34361]: E0224 05:39:28.493182 34361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 24 05:39:28.613618 master-0 kubenswrapper[34361]: I0224 05:39:28.613422 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb342c942d3d92fd08ed7cf68fafb94c" path="/var/lib/kubelet/pods/eb342c942d3d92fd08ed7cf68fafb94c/volumes" Feb 24 05:39:29.294954 master-0 kubenswrapper[34361]: E0224 05:39:29.294847 34361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 24 05:39:29.451479 master-0 kubenswrapper[34361]: E0224 05:39:29.451126 34361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.1897182e08fecd35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:eb342c942d3d92fd08ed7cf68fafb94c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:39:24.392058165 +0000 UTC m=+124.094675241,LastTimestamp:2026-02-24 05:39:24.392058165 +0000 UTC m=+124.094675241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:39:29.494237 master-0 kubenswrapper[34361]: E0224 05:39:29.494144 34361 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:29.495098 master-0 kubenswrapper[34361]: I0224 05:39:29.495054 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:29.538433 master-0 kubenswrapper[34361]: W0224 05:39:29.538282 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2146f0e3671998cad8bbc2464b009ab7.slice/crio-a19ff3d04752651b937939a7260430ac8a94d6b2cf8952355fa430744cf3adc4 WatchSource:0}: Error finding container a19ff3d04752651b937939a7260430ac8a94d6b2cf8952355fa430744cf3adc4: Status 404 returned error can't find the container with id a19ff3d04752651b937939a7260430ac8a94d6b2cf8952355fa430744cf3adc4 Feb 24 05:39:30.303874 master-0 kubenswrapper[34361]: I0224 05:39:30.303751 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:39:30.303874 master-0 kubenswrapper[34361]: I0224 05:39:30.303839 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:39:30.524162 master-0 kubenswrapper[34361]: I0224 05:39:30.524084 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"2146f0e3671998cad8bbc2464b009ab7","Type":"ContainerStarted","Data":"72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e"} Feb 24 05:39:30.524162 master-0 kubenswrapper[34361]: I0224 05:39:30.524173 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"2146f0e3671998cad8bbc2464b009ab7","Type":"ContainerStarted","Data":"a19ff3d04752651b937939a7260430ac8a94d6b2cf8952355fa430744cf3adc4"} Feb 24 05:39:30.525923 master-0 kubenswrapper[34361]: E0224 05:39:30.525671 34361 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:30.526213 master-0 kubenswrapper[34361]: I0224 05:39:30.526118 34361 status_manager.go:851] "Failed to get status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:30.608651 master-0 kubenswrapper[34361]: I0224 05:39:30.608452 34361 status_manager.go:851] "Failed to get status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:30.896005 master-0 kubenswrapper[34361]: E0224 05:39:30.895805 34361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 24 05:39:31.536261 master-0 kubenswrapper[34361]: E0224 05:39:31.536136 34361 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:39:34.099525 master-0 kubenswrapper[34361]: E0224 05:39:34.098297 34361 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 24 05:39:35.155140 master-0 kubenswrapper[34361]: E0224 05:39:35.154914 34361 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.1897182e08fecd35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:eb342c942d3d92fd08ed7cf68fafb94c,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-24 05:39:24.392058165 +0000 UTC m=+124.094675241,LastTimestamp:2026-02-24 05:39:24.392058165 +0000 UTC m=+124.094675241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 24 05:39:37.596779 master-0 kubenswrapper[34361]: I0224 05:39:37.596706 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:37.600104 master-0 kubenswrapper[34361]: I0224 05:39:37.599998 34361 status_manager.go:851] "Failed to get status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:37.637823 master-0 kubenswrapper[34361]: I0224 05:39:37.637699 34361 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:37.637823 master-0 kubenswrapper[34361]: I0224 05:39:37.637773 34361 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:37.639367 master-0 kubenswrapper[34361]: E0224 05:39:37.639276 34361 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:37.641297 master-0 kubenswrapper[34361]: I0224 05:39:37.641228 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:37.685636 master-0 kubenswrapper[34361]: W0224 05:39:37.685544 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod487622064474ed0ec70f7bf2a0fcb80b.slice/crio-748656d32a82c7e6de7e5a3016d2f6da14927ffce6436ae787ca3e94733fa7bc WatchSource:0}: Error finding container 748656d32a82c7e6de7e5a3016d2f6da14927ffce6436ae787ca3e94733fa7bc: Status 404 returned error can't find the container with id 748656d32a82c7e6de7e5a3016d2f6da14927ffce6436ae787ca3e94733fa7bc Feb 24 05:39:38.301465 master-0 kubenswrapper[34361]: I0224 05:39:38.301349 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:39:38.301826 master-0 kubenswrapper[34361]: I0224 05:39:38.301520 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:39:38.611822 master-0 kubenswrapper[34361]: I0224 05:39:38.611603 34361 generic.go:334] "Generic (PLEG): container finished" podID="487622064474ed0ec70f7bf2a0fcb80b" containerID="012fc5c7b114320d95f9b5d360b87f658e8515173c538d2394a89a54f1fefe23" exitCode=0 Feb 24 05:39:38.613776 master-0 kubenswrapper[34361]: I0224 05:39:38.613684 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerDied","Data":"012fc5c7b114320d95f9b5d360b87f658e8515173c538d2394a89a54f1fefe23"} Feb 24 05:39:38.613881 master-0 kubenswrapper[34361]: 
I0224 05:39:38.613790 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"748656d32a82c7e6de7e5a3016d2f6da14927ffce6436ae787ca3e94733fa7bc"} Feb 24 05:39:38.614941 master-0 kubenswrapper[34361]: I0224 05:39:38.614846 34361 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:38.614941 master-0 kubenswrapper[34361]: I0224 05:39:38.614939 34361 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:38.615842 master-0 kubenswrapper[34361]: E0224 05:39:38.615773 34361 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:38.615948 master-0 kubenswrapper[34361]: I0224 05:39:38.615824 34361 status_manager.go:851] "Failed to get status for pod" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 24 05:39:39.648633 master-0 kubenswrapper[34361]: I0224 05:39:39.648573 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c0305da6e0b04a4394ef2888a487bfa1/kube-controller-manager/0.log" Feb 24 05:39:39.649230 master-0 kubenswrapper[34361]: I0224 05:39:39.648656 34361 generic.go:334] "Generic (PLEG): container finished" podID="c0305da6e0b04a4394ef2888a487bfa1" 
containerID="e0f72d95db3b526338789b8fcf2468920b15351bce1ec3d46e5d53624269cc95" exitCode=1 Feb 24 05:39:39.649230 master-0 kubenswrapper[34361]: I0224 05:39:39.648758 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerDied","Data":"e0f72d95db3b526338789b8fcf2468920b15351bce1ec3d46e5d53624269cc95"} Feb 24 05:39:39.649439 master-0 kubenswrapper[34361]: I0224 05:39:39.649398 34361 scope.go:117] "RemoveContainer" containerID="e0f72d95db3b526338789b8fcf2468920b15351bce1ec3d46e5d53624269cc95" Feb 24 05:39:39.651515 master-0 kubenswrapper[34361]: I0224 05:39:39.651492 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"fe8249a9b23c49f02efbacdb61523fd688bd4e197fcb5a93eb1e55de6b3841ba"} Feb 24 05:39:39.651515 master-0 kubenswrapper[34361]: I0224 05:39:39.651539 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"a9e75a0285a266fd5413b99d5f61e94e662705b4565d671a59d98436ac540c53"} Feb 24 05:39:40.077603 master-0 kubenswrapper[34361]: I0224 05:39:40.077517 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:39:40.123118 master-0 kubenswrapper[34361]: I0224 05:39:40.119254 34361 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:39:40.304330 master-0 kubenswrapper[34361]: I0224 05:39:40.304253 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:39:40.304567 master-0 kubenswrapper[34361]: I0224 05:39:40.304527 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:39:40.665243 master-0 kubenswrapper[34361]: I0224 05:39:40.665190 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c0305da6e0b04a4394ef2888a487bfa1/kube-controller-manager/0.log" Feb 24 05:39:40.666046 master-0 kubenswrapper[34361]: I0224 05:39:40.665283 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"c0305da6e0b04a4394ef2888a487bfa1","Type":"ContainerStarted","Data":"a6c4b7c7c8f2d6f7a5d9574827c1d87fc9e887e6f38197076ff1b4325039d136"} Feb 24 05:39:40.682963 master-0 kubenswrapper[34361]: I0224 05:39:40.682901 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"da1f32e0fdebe000032a3bba060d854644ea58d3b9fe8793e871611b13a7d8b7"} Feb 24 05:39:40.682963 master-0 kubenswrapper[34361]: I0224 05:39:40.682951 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"2260bd3810d61e77bcd8c5e16ab5a465bb714f0f876789638d2f8ae7815be6bd"} Feb 24 05:39:40.682963 master-0 kubenswrapper[34361]: I0224 05:39:40.682963 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"487622064474ed0ec70f7bf2a0fcb80b","Type":"ContainerStarted","Data":"9eb8cd13348abdd4359b93e13bb2af3966f1d546e6bcc4ddde68b5641678b500"} Feb 24 05:39:40.683359 master-0 kubenswrapper[34361]: I0224 05:39:40.683239 34361 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:40.683359 master-0 kubenswrapper[34361]: I0224 05:39:40.683256 34361 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:40.684336 master-0 kubenswrapper[34361]: I0224 05:39:40.683449 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:42.642732 master-0 kubenswrapper[34361]: I0224 05:39:42.642658 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:42.643780 master-0 kubenswrapper[34361]: I0224 05:39:42.643585 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:42.653128 master-0 kubenswrapper[34361]: I0224 05:39:42.653054 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:45.590662 master-0 kubenswrapper[34361]: I0224 05:39:45.590588 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:39:45.591661 master-0 kubenswrapper[34361]: I0224 05:39:45.591609 34361 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: 
connection refused" start-of-body= Feb 24 05:39:45.591850 master-0 kubenswrapper[34361]: I0224 05:39:45.591810 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 24 05:39:45.703827 master-0 kubenswrapper[34361]: I0224 05:39:45.703765 34361 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:45.726156 master-0 kubenswrapper[34361]: I0224 05:39:45.726109 34361 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:45.726458 master-0 kubenswrapper[34361]: I0224 05:39:45.726445 34361 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:45.731850 master-0 kubenswrapper[34361]: I0224 05:39:45.731786 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 24 05:39:45.735391 master-0 kubenswrapper[34361]: I0224 05:39:45.735236 34361 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="cd57353b-c549-43fa-a696-c8d703eace8a" Feb 24 05:39:46.735095 master-0 kubenswrapper[34361]: I0224 05:39:46.734988 34361 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:46.735095 master-0 kubenswrapper[34361]: I0224 05:39:46.735060 34361 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b16fd761-45fd-42e1-b670-d5e4d80b990b" Feb 24 05:39:48.300811 master-0 kubenswrapper[34361]: I0224 05:39:48.300697 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:39:48.301962 master-0 kubenswrapper[34361]: I0224 05:39:48.300822 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:39:50.077685 master-0 kubenswrapper[34361]: I0224 05:39:50.077567 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:39:50.304339 master-0 kubenswrapper[34361]: I0224 05:39:50.304168 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:39:50.304723 master-0 kubenswrapper[34361]: I0224 05:39:50.304387 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:39:50.623876 master-0 kubenswrapper[34361]: I0224 05:39:50.623772 34361 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
oldPodUID="487622064474ed0ec70f7bf2a0fcb80b" podUID="cd57353b-c549-43fa-a696-c8d703eace8a" Feb 24 05:39:55.066875 master-0 kubenswrapper[34361]: I0224 05:39:55.066773 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 05:39:55.591217 master-0 kubenswrapper[34361]: I0224 05:39:55.591121 34361 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Feb 24 05:39:55.591217 master-0 kubenswrapper[34361]: I0224 05:39:55.591221 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Feb 24 05:39:55.860448 master-0 kubenswrapper[34361]: I0224 05:39:55.860175 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 05:39:55.884791 master-0 kubenswrapper[34361]: I0224 05:39:55.884705 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-z9499" Feb 24 05:39:56.182037 master-0 kubenswrapper[34361]: I0224 05:39:56.181814 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 24 05:39:56.208497 master-0 kubenswrapper[34361]: I0224 05:39:56.208399 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-46rst" Feb 24 05:39:56.305851 master-0 kubenswrapper[34361]: I0224 05:39:56.305753 34361 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 05:39:56.329700 master-0 kubenswrapper[34361]: I0224 05:39:56.329602 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 24 05:39:56.666603 master-0 kubenswrapper[34361]: I0224 05:39:56.666510 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-hpmvm"
Feb 24 05:39:57.239712 master-0 kubenswrapper[34361]: I0224 05:39:57.239601 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 24 05:39:57.289527 master-0 kubenswrapper[34361]: I0224 05:39:57.289408 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 24 05:39:57.397587 master-0 kubenswrapper[34361]: I0224 05:39:57.397454 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 24 05:39:57.426357 master-0 kubenswrapper[34361]: I0224 05:39:57.426238 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 05:39:57.533111 master-0 kubenswrapper[34361]: I0224 05:39:57.532903 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 24 05:39:57.733014 master-0 kubenswrapper[34361]: I0224 05:39:57.732892 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-qvwf6"
Feb 24 05:39:57.823483 master-0 kubenswrapper[34361]: I0224 05:39:57.823194 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 24 05:39:57.900494 master-0 kubenswrapper[34361]: I0224 05:39:57.900387 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 24 05:39:58.260006 master-0 kubenswrapper[34361]: I0224 05:39:58.259926 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 05:39:58.301831 master-0 kubenswrapper[34361]: I0224 05:39:58.301719 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Feb 24 05:39:58.302229 master-0 kubenswrapper[34361]: I0224 05:39:58.301825 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused"
Feb 24 05:39:58.322434 master-0 kubenswrapper[34361]: I0224 05:39:58.322332 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 24 05:39:58.470912 master-0 kubenswrapper[34361]: I0224 05:39:58.470800 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 24 05:39:58.483542 master-0 kubenswrapper[34361]: I0224 05:39:58.483454 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 24 05:39:58.585745 master-0 kubenswrapper[34361]: I0224 05:39:58.585524 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 24 05:39:58.627527 master-0 kubenswrapper[34361]: I0224 05:39:58.627431 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 24 05:39:58.639517 master-0 kubenswrapper[34361]: I0224 05:39:58.639415 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 05:39:58.732773 master-0 kubenswrapper[34361]: I0224 05:39:58.732697 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-9sp2t"
Feb 24 05:39:58.776088 master-0 kubenswrapper[34361]: I0224 05:39:58.776002 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 24 05:39:58.861681 master-0 kubenswrapper[34361]: I0224 05:39:58.861251 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 24 05:39:59.062034 master-0 kubenswrapper[34361]: I0224 05:39:59.061921 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 24 05:39:59.112152 master-0 kubenswrapper[34361]: I0224 05:39:59.111921 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 24 05:39:59.163407 master-0 kubenswrapper[34361]: I0224 05:39:59.163283 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 05:39:59.174479 master-0 kubenswrapper[34361]: I0224 05:39:59.174393 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 05:39:59.292374 master-0 kubenswrapper[34361]: I0224 05:39:59.292191 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 24 05:39:59.304882 master-0 kubenswrapper[34361]: I0224 05:39:59.304773 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 05:39:59.342051 master-0 kubenswrapper[34361]: I0224 05:39:59.339249 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 24 05:39:59.386103 master-0 kubenswrapper[34361]: I0224 05:39:59.385900 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 24 05:39:59.452141 master-0 kubenswrapper[34361]: I0224 05:39:59.452037 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 24 05:39:59.557348 master-0 kubenswrapper[34361]: I0224 05:39:59.557221 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 24 05:39:59.570402 master-0 kubenswrapper[34361]: I0224 05:39:59.570286 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 24 05:39:59.602606 master-0 kubenswrapper[34361]: I0224 05:39:59.602524 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 05:39:59.620669 master-0 kubenswrapper[34361]: I0224 05:39:59.620574 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 24 05:39:59.692633 master-0 kubenswrapper[34361]: I0224 05:39:59.692414 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 24 05:39:59.715529 master-0 kubenswrapper[34361]: I0224 05:39:59.715441 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 05:39:59.737434 master-0 kubenswrapper[34361]: I0224 05:39:59.737369 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 24 05:39:59.801154 master-0 kubenswrapper[34361]: I0224 05:39:59.801092 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 05:39:59.847011 master-0 kubenswrapper[34361]: I0224 05:39:59.846932 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 05:39:59.847011 master-0 kubenswrapper[34361]: I0224 05:39:59.846982 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-qhzzf"
Feb 24 05:39:59.865925 master-0 kubenswrapper[34361]: I0224 05:39:59.865845 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 24 05:39:59.885159 master-0 kubenswrapper[34361]: I0224 05:39:59.885062 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 24 05:39:59.892003 master-0 kubenswrapper[34361]: I0224 05:39:59.891955 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 24 05:39:59.900934 master-0 kubenswrapper[34361]: I0224 05:39:59.900901 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 24 05:39:59.998032 master-0 kubenswrapper[34361]: I0224 05:39:59.997948 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 24 05:40:00.088106 master-0 kubenswrapper[34361]: I0224 05:40:00.088050 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 24 05:40:00.167242 master-0 kubenswrapper[34361]: I0224 05:40:00.167184 34361 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 24 05:40:00.189986 master-0 kubenswrapper[34361]: I0224 05:40:00.189932 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 05:40:00.266503 master-0 kubenswrapper[34361]: I0224 05:40:00.266299 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 24 05:40:00.304382 master-0 kubenswrapper[34361]: I0224 05:40:00.304269 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body=
Feb 24 05:40:00.305671 master-0 kubenswrapper[34361]: I0224 05:40:00.304439 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused"
Feb 24 05:40:00.323574 master-0 kubenswrapper[34361]: I0224 05:40:00.323469 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 24 05:40:00.341845 master-0 kubenswrapper[34361]: I0224 05:40:00.341793 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 24 05:40:00.437009 master-0 kubenswrapper[34361]: I0224 05:40:00.436915 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 24 05:40:00.442168 master-0 kubenswrapper[34361]: I0224 05:40:00.442091 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 05:40:00.480552 master-0 kubenswrapper[34361]: I0224 05:40:00.480477 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-44r64"
Feb 24 05:40:00.526629 master-0 kubenswrapper[34361]: I0224 05:40:00.526370 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 24 05:40:00.602154 master-0 kubenswrapper[34361]: I0224 05:40:00.600582 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9cc2t"
Feb 24 05:40:00.609455 master-0 kubenswrapper[34361]: I0224 05:40:00.609394 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 24 05:40:00.709145 master-0 kubenswrapper[34361]: I0224 05:40:00.709054 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 24 05:40:00.728115 master-0 kubenswrapper[34361]: I0224 05:40:00.728039 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 24 05:40:00.747692 master-0 kubenswrapper[34361]: I0224 05:40:00.747611 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 24 05:40:00.755651 master-0 kubenswrapper[34361]: I0224 05:40:00.755613 34361 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 24 05:40:00.768763 master-0 kubenswrapper[34361]: I0224 05:40:00.768701 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 24 05:40:00.768904 master-0 kubenswrapper[34361]: I0224 05:40:00.768811 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 24 05:40:00.778482 master-0 kubenswrapper[34361]: I0224 05:40:00.778352 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 24 05:40:00.812356 master-0 kubenswrapper[34361]: I0224 05:40:00.812180 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=15.812148445 podStartE2EDuration="15.812148445s" podCreationTimestamp="2026-02-24 05:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:40:00.804776367 +0000 UTC m=+160.507393453" watchObservedRunningTime="2026-02-24 05:40:00.812148445 +0000 UTC m=+160.514765521"
Feb 24 05:40:00.962649 master-0 kubenswrapper[34361]: I0224 05:40:00.962573 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 24 05:40:00.978847 master-0 kubenswrapper[34361]: I0224 05:40:00.978770 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 24 05:40:01.014879 master-0 kubenswrapper[34361]: I0224 05:40:01.014798 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 24 05:40:01.035463 master-0 kubenswrapper[34361]: I0224 05:40:01.035287 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 24 05:40:01.074129 master-0 kubenswrapper[34361]: I0224 05:40:01.073987 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 05:40:01.154345 master-0 kubenswrapper[34361]: I0224 05:40:01.153696 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 24 05:40:01.154345 master-0 kubenswrapper[34361]: I0224 05:40:01.153945 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 24 05:40:01.175352 master-0 kubenswrapper[34361]: I0224 05:40:01.174575 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 24 05:40:01.181336 master-0 kubenswrapper[34361]: I0224 05:40:01.180682 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 24 05:40:01.225372 master-0 kubenswrapper[34361]: I0224 05:40:01.225250 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 24 05:40:01.324190 master-0 kubenswrapper[34361]: I0224 05:40:01.323940 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 24 05:40:01.334087 master-0 kubenswrapper[34361]: I0224 05:40:01.334031 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 24 05:40:01.361837 master-0 kubenswrapper[34361]: I0224 05:40:01.361753 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 24 05:40:01.388192 master-0 kubenswrapper[34361]: I0224 05:40:01.388032 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 05:40:01.399462 master-0 kubenswrapper[34361]: I0224 05:40:01.399384 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 24 05:40:01.481952 master-0 kubenswrapper[34361]: I0224 05:40:01.481825 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 05:40:01.501080 master-0 kubenswrapper[34361]: I0224 05:40:01.500996 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 24 05:40:01.503168 master-0 kubenswrapper[34361]: I0224 05:40:01.503087 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 24 05:40:01.527869 master-0 kubenswrapper[34361]: I0224 05:40:01.527763 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 24 05:40:01.535536 master-0 kubenswrapper[34361]: I0224 05:40:01.535493 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 24 05:40:01.606379 master-0 kubenswrapper[34361]: I0224 05:40:01.606178 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 24 05:40:01.615340 master-0 kubenswrapper[34361]: I0224 05:40:01.615253 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 24 05:40:01.623346 master-0 kubenswrapper[34361]: I0224 05:40:01.623263 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-9mjxb"
Feb 24 05:40:01.655172 master-0 kubenswrapper[34361]: I0224 05:40:01.655088 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rv6pq"
Feb 24 05:40:01.670885 master-0 kubenswrapper[34361]: I0224 05:40:01.670787 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 24 05:40:01.742575 master-0 kubenswrapper[34361]: I0224 05:40:01.742492 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 24 05:40:01.743834 master-0 kubenswrapper[34361]: I0224 05:40:01.743766 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 24 05:40:01.763123 master-0 kubenswrapper[34361]: I0224 05:40:01.763025 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 24 05:40:01.786126 master-0 kubenswrapper[34361]: I0224 05:40:01.786024 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 24 05:40:01.834637 master-0 kubenswrapper[34361]: I0224 05:40:01.834540 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 24 05:40:01.847660 master-0 kubenswrapper[34361]: I0224 05:40:01.847574 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 24 05:40:01.905898 master-0 kubenswrapper[34361]: I0224 05:40:01.905720 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 05:40:01.930858 master-0 kubenswrapper[34361]: I0224 05:40:01.930718 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 24 05:40:01.947157 master-0 kubenswrapper[34361]: I0224 05:40:01.947101 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 24 05:40:01.963819 master-0 kubenswrapper[34361]: I0224 05:40:01.963751 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 24 05:40:02.035180 master-0 kubenswrapper[34361]: I0224 05:40:02.035120 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 24 05:40:02.085530 master-0 kubenswrapper[34361]: I0224 05:40:02.085467 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 24 05:40:02.088286 master-0 kubenswrapper[34361]: I0224 05:40:02.088195 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-sdvhz"
Feb 24 05:40:02.202590 master-0 kubenswrapper[34361]: I0224 05:40:02.202393 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 05:40:02.325301 master-0 kubenswrapper[34361]: I0224 05:40:02.325210 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 24 05:40:02.337541 master-0 kubenswrapper[34361]: I0224 05:40:02.337473 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 24 05:40:02.551417 master-0 kubenswrapper[34361]: I0224 05:40:02.551347 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 24 05:40:02.571013 master-0 kubenswrapper[34361]: I0224 05:40:02.570904 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 24 05:40:02.624117 master-0 kubenswrapper[34361]: I0224 05:40:02.624044 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 05:40:02.646423 master-0 kubenswrapper[34361]: I0224 05:40:02.646337 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-85vp6"
Feb 24 05:40:02.672705 master-0 kubenswrapper[34361]: I0224 05:40:02.672588 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 05:40:02.712751 master-0 kubenswrapper[34361]: I0224 05:40:02.712645 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 24 05:40:02.793025 master-0 kubenswrapper[34361]: I0224 05:40:02.792898 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 24 05:40:02.852806 master-0 kubenswrapper[34361]: I0224 05:40:02.852583 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 24 05:40:02.918785 master-0 kubenswrapper[34361]: I0224 05:40:02.918659 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 24 05:40:03.023456 master-0 kubenswrapper[34361]: I0224 05:40:03.023367 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 05:40:03.103292 master-0 kubenswrapper[34361]: I0224 05:40:03.103112 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-kf8b6"
Feb 24 05:40:03.106854 master-0 kubenswrapper[34361]: I0224 05:40:03.106778 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 05:40:03.191539 master-0 kubenswrapper[34361]: I0224 05:40:03.191428 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 24 05:40:03.247739 master-0 kubenswrapper[34361]: I0224 05:40:03.247643 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 05:40:03.316023 master-0 kubenswrapper[34361]: I0224 05:40:03.315924 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-bnkwf"
Feb 24 05:40:03.363527 master-0 kubenswrapper[34361]: I0224 05:40:03.363336 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 24 05:40:03.370761 master-0 kubenswrapper[34361]: I0224 05:40:03.370707 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 24 05:40:03.410861 master-0 kubenswrapper[34361]: I0224 05:40:03.410773 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 05:40:03.457720 master-0 kubenswrapper[34361]: I0224 05:40:03.457603 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 24 05:40:03.519711 master-0 kubenswrapper[34361]: I0224 05:40:03.519603 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 24 05:40:03.555957 master-0 kubenswrapper[34361]: I0224 05:40:03.555882 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 24 05:40:03.558238 master-0 kubenswrapper[34361]: I0224 05:40:03.558189 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-w9h5v"
Feb 24 05:40:03.595805 master-0 kubenswrapper[34361]: I0224 05:40:03.595746 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 24 05:40:03.609862 master-0 kubenswrapper[34361]: I0224 05:40:03.609800 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 24 05:40:03.630989 master-0 kubenswrapper[34361]: I0224 05:40:03.630854 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 24 05:40:03.699851 master-0 kubenswrapper[34361]: I0224 05:40:03.699633 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 24 05:40:03.718383 master-0 kubenswrapper[34361]: I0224 05:40:03.718216 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 24 05:40:03.740934 master-0 kubenswrapper[34361]: I0224 05:40:03.740841 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 24 05:40:03.780233 master-0 kubenswrapper[34361]: I0224 05:40:03.780030 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 24 05:40:03.845120 master-0 kubenswrapper[34361]: I0224 05:40:03.845029 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 24 05:40:03.868276 master-0 kubenswrapper[34361]: I0224 05:40:03.868147 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 24 05:40:03.877281 master-0 kubenswrapper[34361]: I0224 05:40:03.877223 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 24 05:40:03.946633 master-0 kubenswrapper[34361]: I0224 05:40:03.946468 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 24 05:40:03.971730 master-0 kubenswrapper[34361]: I0224 05:40:03.971631 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 24 05:40:04.017258 master-0 kubenswrapper[34361]: I0224 05:40:04.017167 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 05:40:04.055207 master-0 kubenswrapper[34361]: I0224 05:40:04.055130 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 24 05:40:04.179795 master-0 kubenswrapper[34361]: I0224 05:40:04.179710 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 05:40:04.197934 master-0 kubenswrapper[34361]: I0224 05:40:04.197746 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 24 05:40:04.225551 master-0 kubenswrapper[34361]: I0224 05:40:04.225501 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 24 05:40:04.236289 master-0 kubenswrapper[34361]: I0224 05:40:04.236250 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-fxsc2"
Feb 24 05:40:04.357114 master-0 kubenswrapper[34361]: I0224 05:40:04.357017 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 24 05:40:04.435066 master-0 kubenswrapper[34361]: I0224 05:40:04.434978 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zm289"
Feb 24 05:40:04.455174 master-0 kubenswrapper[34361]: I0224 05:40:04.455015 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 24 05:40:04.539503 master-0 kubenswrapper[34361]: I0224 05:40:04.539411 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 24 05:40:04.551391 master-0 kubenswrapper[34361]: I0224 05:40:04.551297 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 05:40:04.735444 master-0 kubenswrapper[34361]: I0224 05:40:04.735367 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-l2gcc"
Feb 24 05:40:04.753751 master-0 kubenswrapper[34361]: I0224 05:40:04.753632 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 24 05:40:04.838183 master-0 kubenswrapper[34361]: I0224 05:40:04.838085 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 24 05:40:04.844227 master-0 kubenswrapper[34361]: I0224 05:40:04.844134 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 05:40:04.899600 master-0 kubenswrapper[34361]: I0224 05:40:04.899458 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 24 05:40:04.900371 master-0 kubenswrapper[34361]: I0224 05:40:04.900264 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 24 05:40:04.913587 master-0 kubenswrapper[34361]: I0224 05:40:04.913528 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 24 05:40:04.957447 master-0 kubenswrapper[34361]: I0224 05:40:04.957284 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 05:40:05.015899 master-0 kubenswrapper[34361]: I0224 05:40:05.015784 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 24 05:40:05.027808 master-0 kubenswrapper[34361]: I0224 05:40:05.027491 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 24 05:40:05.030393 master-0 kubenswrapper[34361]: I0224 05:40:05.030251 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 05:40:05.107955 master-0 kubenswrapper[34361]: I0224 05:40:05.107848 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 05:40:05.194921 master-0 kubenswrapper[34361]: I0224 05:40:05.194669 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-jdmr6"
Feb 24 05:40:05.253048 master-0 kubenswrapper[34361]: I0224 05:40:05.252805 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 24 05:40:05.279678 master-0 kubenswrapper[34361]: I0224 05:40:05.279589 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 24 05:40:05.285076 master-0 kubenswrapper[34361]: I0224 05:40:05.285014 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 05:40:05.304028 master-0 kubenswrapper[34361]: I0224 05:40:05.303923 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-2gwgm"
Feb 24 05:40:05.313923 master-0 kubenswrapper[34361]: I0224 05:40:05.313821 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 24 05:40:05.602140 master-0 kubenswrapper[34361]: I0224 05:40:05.601924 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:40:05.605488 master-0 kubenswrapper[34361]: I0224 05:40:05.605412 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 24 05:40:05.613905 master-0 kubenswrapper[34361]: I0224 05:40:05.613837 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:40:05.667302 master-0 kubenswrapper[34361]: I0224 05:40:05.667216 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 24 05:40:05.709703 master-0 kubenswrapper[34361]: I0224 05:40:05.709605 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-2bzhs"
Feb 24 05:40:05.727865 master-0 kubenswrapper[34361]: I0224 05:40:05.727781 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Feb 24 05:40:05.741914 master-0 kubenswrapper[34361]: I0224 05:40:05.741797 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 24 05:40:05.742182 master-0 kubenswrapper[34361]: I0224 05:40:05.742119 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 05:40:05.745629 master-0 kubenswrapper[34361]: I0224 05:40:05.745582 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-l6bv5"
Feb 24 05:40:05.761135 master-0 kubenswrapper[34361]: I0224 05:40:05.761076 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 24 05:40:05.770430 master-0 kubenswrapper[34361]: I0224 05:40:05.770367 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 24 05:40:05.782481 master-0 kubenswrapper[34361]: I0224 05:40:05.782425 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 05:40:05.793129 master-0 kubenswrapper[34361]: I0224 05:40:05.793068 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 24 05:40:05.893774 master-0 kubenswrapper[34361]: I0224 05:40:05.893560 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 24 05:40:05.930769 master-0 kubenswrapper[34361]: I0224 05:40:05.930686 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 24 05:40:05.992703 master-0 kubenswrapper[34361]: I0224 05:40:05.992592 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 24 05:40:06.006561 master-0 kubenswrapper[34361]: I0224 05:40:06.006163 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 24 05:40:06.093927 master-0 kubenswrapper[34361]: I0224 05:40:06.093806 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 24 05:40:06.126518 master-0 kubenswrapper[34361]: I0224 05:40:06.126412 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 05:40:06.250538 master-0 kubenswrapper[34361]: I0224 05:40:06.250452 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 24 05:40:06.267540 master-0 kubenswrapper[34361]: I0224 05:40:06.267340 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 24 05:40:06.284463 master-0 kubenswrapper[34361]: I0224 05:40:06.284368 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 24 05:40:06.341560 master-0 kubenswrapper[34361]: I0224 05:40:06.341488 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 24 05:40:06.419941 master-0 kubenswrapper[34361]: I0224 05:40:06.419896 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 24 05:40:06.425045 master-0 kubenswrapper[34361]: I0224 05:40:06.424954 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-9xtkh"
Feb 24 05:40:06.458895 master-0 kubenswrapper[34361]: I0224 05:40:06.458814 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-d22d8"
Feb 24 05:40:06.647019 master-0 kubenswrapper[34361]: I0224 05:40:06.646822 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 24 05:40:06.679565 master-0 kubenswrapper[34361]: I0224 05:40:06.679460 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 24 05:40:06.738291 master-0 kubenswrapper[34361]: I0224 05:40:06.738193 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 24 05:40:06.744670 master-0 kubenswrapper[34361]: I0224 05:40:06.744591 34361 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 05:40:06.747129 master-0 kubenswrapper[34361]: I0224 05:40:06.746995 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 05:40:06.781542 master-0 kubenswrapper[34361]: I0224 05:40:06.781344 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 05:40:06.786622 master-0 kubenswrapper[34361]: I0224 05:40:06.786541 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 24 05:40:06.996201 master-0 kubenswrapper[34361]: I0224 05:40:06.995827 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 05:40:07.022818 master-0 kubenswrapper[34361]: I0224 05:40:07.022677 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 05:40:07.034498 master-0 kubenswrapper[34361]: I0224 05:40:07.034350 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-zqsq8" Feb 24 05:40:07.131731 master-0 kubenswrapper[34361]: I0224 05:40:07.131642 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-ckqkb" Feb 24 05:40:07.151357 master-0 kubenswrapper[34361]: I0224 05:40:07.151234 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-bcdxv" Feb 24 05:40:07.220154 master-0 kubenswrapper[34361]: I0224 05:40:07.220076 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 05:40:07.241496 master-0 kubenswrapper[34361]: I0224 05:40:07.241396 
34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 05:40:07.280201 master-0 kubenswrapper[34361]: I0224 05:40:07.279955 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 05:40:07.294224 master-0 kubenswrapper[34361]: I0224 05:40:07.294162 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 05:40:07.306964 master-0 kubenswrapper[34361]: I0224 05:40:07.306882 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 05:40:07.307857 master-0 kubenswrapper[34361]: I0224 05:40:07.307783 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 24 05:40:07.353119 master-0 kubenswrapper[34361]: I0224 05:40:07.353024 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 05:40:07.431936 master-0 kubenswrapper[34361]: I0224 05:40:07.431851 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 05:40:07.475923 master-0 kubenswrapper[34361]: I0224 05:40:07.475774 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 05:40:07.541826 master-0 kubenswrapper[34361]: I0224 05:40:07.541702 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 24 05:40:07.582678 master-0 kubenswrapper[34361]: I0224 05:40:07.582604 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 
05:40:07.591730 master-0 kubenswrapper[34361]: I0224 05:40:07.591580 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-9rnhs" Feb 24 05:40:07.605923 master-0 kubenswrapper[34361]: I0224 05:40:07.605849 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 05:40:07.615646 master-0 kubenswrapper[34361]: I0224 05:40:07.615601 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 05:40:07.654541 master-0 kubenswrapper[34361]: I0224 05:40:07.654461 34361 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 05:40:07.710663 master-0 kubenswrapper[34361]: I0224 05:40:07.710577 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7qtvbjhkqad41" Feb 24 05:40:07.740112 master-0 kubenswrapper[34361]: I0224 05:40:07.740040 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 05:40:07.774390 master-0 kubenswrapper[34361]: I0224 05:40:07.774338 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 24 05:40:07.852583 master-0 kubenswrapper[34361]: I0224 05:40:07.852404 34361 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 24 05:40:07.855506 master-0 kubenswrapper[34361]: I0224 05:40:07.855118 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-rzbrp" Feb 24 05:40:07.858950 master-0 kubenswrapper[34361]: I0224 05:40:07.858869 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 24 05:40:07.875640 master-0 
kubenswrapper[34361]: I0224 05:40:07.875558 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 05:40:07.903720 master-0 kubenswrapper[34361]: I0224 05:40:07.903586 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 24 05:40:07.913731 master-0 kubenswrapper[34361]: I0224 05:40:07.913669 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 05:40:07.920971 master-0 kubenswrapper[34361]: I0224 05:40:07.920816 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 05:40:07.923232 master-0 kubenswrapper[34361]: I0224 05:40:07.923192 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 05:40:08.002610 master-0 kubenswrapper[34361]: I0224 05:40:08.002533 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 24 05:40:08.111020 master-0 kubenswrapper[34361]: I0224 05:40:08.110779 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 24 05:40:08.145728 master-0 kubenswrapper[34361]: I0224 05:40:08.145200 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-gnn9c" Feb 24 05:40:08.168960 master-0 kubenswrapper[34361]: I0224 05:40:08.168862 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 24 05:40:08.200905 master-0 kubenswrapper[34361]: I0224 05:40:08.200825 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-zl45m" Feb 24 05:40:08.223582 master-0 kubenswrapper[34361]: I0224 
05:40:08.223507 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-7tq27" Feb 24 05:40:08.225719 master-0 kubenswrapper[34361]: I0224 05:40:08.225673 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 05:40:08.242841 master-0 kubenswrapper[34361]: I0224 05:40:08.242768 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 24 05:40:08.296792 master-0 kubenswrapper[34361]: I0224 05:40:08.296675 34361 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 24 05:40:08.297231 master-0 kubenswrapper[34361]: I0224 05:40:08.297144 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor" containerID="cri-o://72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e" gracePeriod=5 Feb 24 05:40:08.301683 master-0 kubenswrapper[34361]: I0224 05:40:08.301572 34361 patch_prober.go:28] interesting pod/console-67bcb9df49-d2cv6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Feb 24 05:40:08.301683 master-0 kubenswrapper[34361]: I0224 05:40:08.301661 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Feb 24 05:40:08.358535 master-0 kubenswrapper[34361]: I0224 05:40:08.358429 34361 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-srdvz" Feb 24 05:40:08.460795 master-0 kubenswrapper[34361]: I0224 05:40:08.460623 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 24 05:40:08.563698 master-0 kubenswrapper[34361]: I0224 05:40:08.557220 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 05:40:08.563698 master-0 kubenswrapper[34361]: I0224 05:40:08.557414 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 05:40:08.579217 master-0 kubenswrapper[34361]: I0224 05:40:08.579149 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 05:40:08.667169 master-0 kubenswrapper[34361]: I0224 05:40:08.667073 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 05:40:08.670466 master-0 kubenswrapper[34361]: I0224 05:40:08.670426 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 24 05:40:08.682772 master-0 kubenswrapper[34361]: I0224 05:40:08.681689 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 05:40:08.730330 master-0 kubenswrapper[34361]: I0224 05:40:08.728136 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 05:40:08.967364 master-0 kubenswrapper[34361]: I0224 05:40:08.967252 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 24 05:40:09.004647 master-0 kubenswrapper[34361]: I0224 05:40:09.004547 34361 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 24 05:40:09.169262 master-0 kubenswrapper[34361]: I0224 05:40:09.169186 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 05:40:09.233080 master-0 kubenswrapper[34361]: I0224 05:40:09.232997 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 05:40:09.248571 master-0 kubenswrapper[34361]: I0224 05:40:09.248512 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 05:40:09.355705 master-0 kubenswrapper[34361]: I0224 05:40:09.355561 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 05:40:09.445891 master-0 kubenswrapper[34361]: I0224 05:40:09.445819 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 05:40:09.559732 master-0 kubenswrapper[34361]: I0224 05:40:09.559645 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 05:40:09.564096 master-0 kubenswrapper[34361]: I0224 05:40:09.564027 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 05:40:09.645808 master-0 kubenswrapper[34361]: I0224 05:40:09.645643 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 05:40:09.666587 master-0 kubenswrapper[34361]: I0224 05:40:09.666521 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 05:40:09.762246 master-0 kubenswrapper[34361]: I0224 05:40:09.762152 34361 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 24 05:40:09.828493 master-0 kubenswrapper[34361]: I0224 05:40:09.828430 34361 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 05:40:09.836142 master-0 kubenswrapper[34361]: I0224 05:40:09.836068 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 05:40:09.844918 master-0 kubenswrapper[34361]: I0224 05:40:09.844880 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 05:40:09.848542 master-0 kubenswrapper[34361]: I0224 05:40:09.848255 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 05:40:09.895124 master-0 kubenswrapper[34361]: I0224 05:40:09.894555 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-p74xw" Feb 24 05:40:09.909965 master-0 kubenswrapper[34361]: I0224 05:40:09.909831 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 05:40:09.949234 master-0 kubenswrapper[34361]: I0224 05:40:09.949169 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 05:40:09.959676 master-0 kubenswrapper[34361]: I0224 05:40:09.959645 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 05:40:10.019241 master-0 kubenswrapper[34361]: I0224 05:40:10.019154 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 24 05:40:10.131404 master-0 kubenswrapper[34361]: I0224 05:40:10.131339 34361 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Feb 24 05:40:10.207868 master-0 kubenswrapper[34361]: I0224 05:40:10.207701 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 05:40:10.251542 master-0 kubenswrapper[34361]: I0224 05:40:10.251272 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 05:40:10.304385 master-0 kubenswrapper[34361]: I0224 05:40:10.304303 34361 patch_prober.go:28] interesting pod/console-5b6cfdbd-5qbf5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" start-of-body= Feb 24 05:40:10.304759 master-0 kubenswrapper[34361]: I0224 05:40:10.304397 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" probeResult="failure" output="Get \"https://10.128.0.87:8443/health\": dial tcp 10.128.0.87:8443: connect: connection refused" Feb 24 05:40:10.360356 master-0 kubenswrapper[34361]: I0224 05:40:10.360273 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 05:40:10.446116 master-0 kubenswrapper[34361]: I0224 05:40:10.446066 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 24 05:40:10.607754 master-0 kubenswrapper[34361]: I0224 05:40:10.607686 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 05:40:10.627669 master-0 kubenswrapper[34361]: I0224 05:40:10.627630 34361 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 24 05:40:10.654461 master-0 kubenswrapper[34361]: I0224 05:40:10.654439 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 05:40:10.786015 master-0 kubenswrapper[34361]: I0224 05:40:10.785974 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 05:40:10.824486 master-0 kubenswrapper[34361]: I0224 05:40:10.824417 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 24 05:40:10.824842 master-0 kubenswrapper[34361]: I0224 05:40:10.824806 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 05:40:10.825883 master-0 kubenswrapper[34361]: I0224 05:40:10.825840 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 05:40:10.884193 master-0 kubenswrapper[34361]: I0224 05:40:10.884000 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 05:40:10.922554 master-0 kubenswrapper[34361]: I0224 05:40:10.922465 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 24 05:40:10.924056 master-0 kubenswrapper[34361]: I0224 05:40:10.924013 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 05:40:11.010629 master-0 kubenswrapper[34361]: I0224 05:40:11.010545 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 24 05:40:11.067936 master-0 kubenswrapper[34361]: I0224 05:40:11.067771 34361 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 05:40:11.316727 master-0 kubenswrapper[34361]: I0224 05:40:11.316637 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-ll4w9" Feb 24 05:40:11.498209 master-0 kubenswrapper[34361]: I0224 05:40:11.498125 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 24 05:40:11.621300 master-0 kubenswrapper[34361]: I0224 05:40:11.621143 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-b9rnb" Feb 24 05:40:12.157733 master-0 kubenswrapper[34361]: I0224 05:40:12.157584 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 05:40:12.366954 master-0 kubenswrapper[34361]: I0224 05:40:12.366823 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 05:40:12.622735 master-0 kubenswrapper[34361]: I0224 05:40:12.622610 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 24 05:40:12.870142 master-0 kubenswrapper[34361]: I0224 05:40:12.870007 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 24 05:40:13.746358 master-0 kubenswrapper[34361]: I0224 05:40:13.746222 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 24 05:40:13.912174 master-0 kubenswrapper[34361]: I0224 05:40:13.912103 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log" Feb 24 05:40:13.912453 master-0 kubenswrapper[34361]: I0224 05:40:13.912259 34361 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 24 05:40:13.991293 master-0 kubenswrapper[34361]: I0224 05:40:13.991185 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 24 05:40:13.991634 master-0 kubenswrapper[34361]: I0224 05:40:13.991427 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 24 05:40:13.991634 master-0 kubenswrapper[34361]: I0224 05:40:13.991461 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests" (OuterVolumeSpecName: "manifests") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:40:13.991634 master-0 kubenswrapper[34361]: I0224 05:40:13.991487 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 24 05:40:13.991634 master-0 kubenswrapper[34361]: I0224 05:40:13.991595 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock" (OuterVolumeSpecName: "var-lock") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:40:13.991850 master-0 kubenswrapper[34361]: I0224 05:40:13.991644 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 24 05:40:13.991850 master-0 kubenswrapper[34361]: I0224 05:40:13.991760 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:40:13.991973 master-0 kubenswrapper[34361]: I0224 05:40:13.991927 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") pod \"2146f0e3671998cad8bbc2464b009ab7\" (UID: \"2146f0e3671998cad8bbc2464b009ab7\") " Feb 24 05:40:13.992684 master-0 kubenswrapper[34361]: I0224 05:40:13.992643 34361 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-manifests\") on node \"master-0\" DevicePath \"\"" Feb 24 05:40:13.992684 master-0 kubenswrapper[34361]: I0224 05:40:13.992678 34361 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 24 05:40:13.992798 master-0 kubenswrapper[34361]: I0224 05:40:13.992698 34361 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-resource-dir\") on node \"master-0\" 
DevicePath \"\""
Feb 24 05:40:13.992798 master-0 kubenswrapper[34361]: I0224 05:40:13.992753 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log" (OuterVolumeSpecName: "var-log") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:40:14.001568 master-0 kubenswrapper[34361]: I0224 05:40:14.000294 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "2146f0e3671998cad8bbc2464b009ab7" (UID: "2146f0e3671998cad8bbc2464b009ab7"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:40:14.031575 master-0 kubenswrapper[34361]: I0224 05:40:14.031462 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_2146f0e3671998cad8bbc2464b009ab7/startup-monitor/0.log"
Feb 24 05:40:14.031726 master-0 kubenswrapper[34361]: I0224 05:40:14.031601 34361 generic.go:334] "Generic (PLEG): container finished" podID="2146f0e3671998cad8bbc2464b009ab7" containerID="72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e" exitCode=137
Feb 24 05:40:14.031726 master-0 kubenswrapper[34361]: I0224 05:40:14.031688 34361 scope.go:117] "RemoveContainer" containerID="72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e"
Feb 24 05:40:14.031881 master-0 kubenswrapper[34361]: I0224 05:40:14.031756 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 24 05:40:14.072061 master-0 kubenswrapper[34361]: I0224 05:40:14.071938 34361 scope.go:117] "RemoveContainer" containerID="72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e"
Feb 24 05:40:14.073022 master-0 kubenswrapper[34361]: E0224 05:40:14.072924 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e\": container with ID starting with 72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e not found: ID does not exist" containerID="72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e"
Feb 24 05:40:14.073182 master-0 kubenswrapper[34361]: I0224 05:40:14.073019 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e"} err="failed to get container status \"72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e\": rpc error: code = NotFound desc = could not find container \"72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e\": container with ID starting with 72e021a31a37c96d72f75db831712f6fa7bf4d4e9a446833d13646a29ff48a2e not found: ID does not exist"
Feb 24 05:40:14.095006 master-0 kubenswrapper[34361]: I0224 05:40:14.094936 34361 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-var-log\") on node \"master-0\" DevicePath \"\""
Feb 24 05:40:14.095427 master-0 kubenswrapper[34361]: I0224 05:40:14.095401 34361 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/2146f0e3671998cad8bbc2464b009ab7-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:40:14.617113 master-0 kubenswrapper[34361]: I0224 05:40:14.616578 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2146f0e3671998cad8bbc2464b009ab7" path="/var/lib/kubelet/pods/2146f0e3671998cad8bbc2464b009ab7/volumes"
Feb 24 05:40:18.287333 master-0 kubenswrapper[34361]: I0224 05:40:18.287242 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65cdf565cd-555rj"]
Feb 24 05:40:18.288111 master-0 kubenswrapper[34361]: I0224 05:40:18.287550 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" podUID="2f48332e-92de-42aa-a6e6-db161f005e74" containerName="metrics-server" containerID="cri-o://4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb" gracePeriod=170
Feb 24 05:40:18.308562 master-0 kubenswrapper[34361]: I0224 05:40:18.308483 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"]
Feb 24 05:40:18.308953 master-0 kubenswrapper[34361]: E0224 05:40:18.308923 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 24 05:40:18.308953 master-0 kubenswrapper[34361]: I0224 05:40:18.308950 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 24 05:40:18.309042 master-0 kubenswrapper[34361]: E0224 05:40:18.308962 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" containerName="installer"
Feb 24 05:40:18.309042 master-0 kubenswrapper[34361]: I0224 05:40:18.308973 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" containerName="installer"
Feb 24 05:40:18.309186 master-0 kubenswrapper[34361]: I0224 05:40:18.309161 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="2146f0e3671998cad8bbc2464b009ab7" containerName="startup-monitor"
Feb 24 05:40:18.309234 master-0 kubenswrapper[34361]: I0224 05:40:18.309207 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="175e9f88-7ed2-441b-8de2-71aa4d32c9c5" containerName="installer"
Feb 24 05:40:18.309996 master-0 kubenswrapper[34361]: I0224 05:40:18.309959 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.310813 master-0 kubenswrapper[34361]: I0224 05:40:18.310758 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67bcb9df49-d2cv6"
Feb 24 05:40:18.312514 master-0 kubenswrapper[34361]: I0224 05:40:18.312468 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-8csin008gjsd0"
Feb 24 05:40:18.315136 master-0 kubenswrapper[34361]: I0224 05:40:18.315089 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-bctpb"]
Feb 24 05:40:18.316383 master-0 kubenswrapper[34361]: I0224 05:40:18.316348 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:18.316786 master-0 kubenswrapper[34361]: I0224 05:40:18.316746 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67bcb9df49-d2cv6"
Feb 24 05:40:18.319944 master-0 kubenswrapper[34361]: I0224 05:40:18.319880 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-d588d74dc-gmlm4"]
Feb 24 05:40:18.325396 master-0 kubenswrapper[34361]: I0224 05:40:18.322390 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.325396 master-0 kubenswrapper[34361]: I0224 05:40:18.322398 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 24 05:40:18.325396 master-0 kubenswrapper[34361]: I0224 05:40:18.322426 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 24 05:40:18.326757 master-0 kubenswrapper[34361]: I0224 05:40:18.326146 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 24 05:40:18.326757 master-0 kubenswrapper[34361]: I0224 05:40:18.326475 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7fmeibjvdhibm"
Feb 24 05:40:18.326757 master-0 kubenswrapper[34361]: I0224 05:40:18.326582 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 24 05:40:18.326757 master-0 kubenswrapper[34361]: I0224 05:40:18.326654 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 24 05:40:18.326943 master-0 kubenswrapper[34361]: I0224 05:40:18.326833 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 24 05:40:18.330086 master-0 kubenswrapper[34361]: I0224 05:40:18.326989 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 24 05:40:18.336930 master-0 kubenswrapper[34361]: I0224 05:40:18.334436 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-bctpb"]
Feb 24 05:40:18.347746 master-0 kubenswrapper[34361]: I0224 05:40:18.347539 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"]
Feb 24 05:40:18.352370 master-0 kubenswrapper[34361]: I0224 05:40:18.352292 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-d588d74dc-gmlm4"]
Feb 24 05:40:18.443408 master-0 kubenswrapper[34361]: I0224 05:40:18.442285 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b6cfdbd-5qbf5"]
Feb 24 05:40:18.478906 master-0 kubenswrapper[34361]: I0224 05:40:18.478796 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-secret-metrics-client-certs\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.479219 master-0 kubenswrapper[34361]: I0224 05:40:18.479164 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/54b8a889-884a-4511-8569-42da64109ef8-audit-log\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.479589 master-0 kubenswrapper[34361]: I0224 05:40:18.479536 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.479657 master-0 kubenswrapper[34361]: I0224 05:40:18.479596 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a01a30b1-dc80-4442-b760-3f92d4906df6-metrics-client-ca\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.479657 master-0 kubenswrapper[34361]: I0224 05:40:18.479632 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-tls\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.479725 master-0 kubenswrapper[34361]: I0224 05:40:18.479665 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b8a889-884a-4511-8569-42da64109ef8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.479907 master-0 kubenswrapper[34361]: I0224 05:40:18.479879 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.479956 master-0 kubenswrapper[34361]: I0224 05:40:18.479920 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmfjr\" (UniqueName: \"kubernetes.io/projected/54b8a889-884a-4511-8569-42da64109ef8-kube-api-access-zmfjr\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.479956 master-0 kubenswrapper[34361]: I0224 05:40:18.479949 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-client-ca-bundle\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.480172 master-0 kubenswrapper[34361]: I0224 05:40:18.480112 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.480224 master-0 kubenswrapper[34361]: I0224 05:40:18.480182 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/54b8a889-884a-4511-8569-42da64109ef8-metrics-server-audit-profiles\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.480224 master-0 kubenswrapper[34361]: I0224 05:40:18.480217 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:18.480520 master-0 kubenswrapper[34361]: I0224 05:40:18.480303 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.480520 master-0 kubenswrapper[34361]: I0224 05:40:18.480376 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-grpc-tls\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.480520 master-0 kubenswrapper[34361]: I0224 05:40:18.480434 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbgfm\" (UniqueName: \"kubernetes.io/projected/a01a30b1-dc80-4442-b760-3f92d4906df6-kube-api-access-jbgfm\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.480520 master-0 kubenswrapper[34361]: I0224 05:40:18.480457 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-secret-metrics-server-tls\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.480671 master-0 kubenswrapper[34361]: I0224 05:40:18.480587 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b0a03ff3-e39b-4be9-bb1f-827d00437e62-nginx-conf\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:18.582679 master-0 kubenswrapper[34361]: I0224 05:40:18.582497 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.582679 master-0 kubenswrapper[34361]: I0224 05:40:18.582589 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmfjr\" (UniqueName: \"kubernetes.io/projected/54b8a889-884a-4511-8569-42da64109ef8-kube-api-access-zmfjr\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.582679 master-0 kubenswrapper[34361]: I0224 05:40:18.582633 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-client-ca-bundle\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.582692 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.582732 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/54b8a889-884a-4511-8569-42da64109ef8-metrics-server-audit-profiles\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.582774 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.582813 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.582848 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-grpc-tls\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.582891 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbgfm\" (UniqueName: \"kubernetes.io/projected/a01a30b1-dc80-4442-b760-3f92d4906df6-kube-api-access-jbgfm\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.582923 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-secret-metrics-server-tls\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.582971 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b0a03ff3-e39b-4be9-bb1f-827d00437e62-nginx-conf\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:18.583025 master-0 kubenswrapper[34361]: I0224 05:40:18.583019 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-secret-metrics-client-certs\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.583527 master-0 kubenswrapper[34361]: I0224 05:40:18.583055 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/54b8a889-884a-4511-8569-42da64109ef8-audit-log\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.583527 master-0 kubenswrapper[34361]: I0224 05:40:18.583189 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.583527 master-0 kubenswrapper[34361]: I0224 05:40:18.583221 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a01a30b1-dc80-4442-b760-3f92d4906df6-metrics-client-ca\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.583527 master-0 kubenswrapper[34361]: I0224 05:40:18.583263 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-tls\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.583527 master-0 kubenswrapper[34361]: I0224 05:40:18.583298 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b8a889-884a-4511-8569-42da64109ef8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.585062 master-0 kubenswrapper[34361]: I0224 05:40:18.585023 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54b8a889-884a-4511-8569-42da64109ef8-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.588327 master-0 kubenswrapper[34361]: I0224 05:40:18.586287 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/54b8a889-884a-4511-8569-42da64109ef8-audit-log\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.594334 master-0 kubenswrapper[34361]: I0224 05:40:18.592347 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-secret-metrics-server-tls\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.594334 master-0 kubenswrapper[34361]: I0224 05:40:18.593633 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b0a03ff3-e39b-4be9-bb1f-827d00437e62-nginx-conf\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:18.598344 master-0 kubenswrapper[34361]: I0224 05:40:18.595683 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-client-ca-bundle\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.602333 master-0 kubenswrapper[34361]: I0224 05:40:18.600587 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/54b8a889-884a-4511-8569-42da64109ef8-secret-metrics-client-certs\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.602333 master-0 kubenswrapper[34361]: I0224 05:40:18.601111 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.602333 master-0 kubenswrapper[34361]: E0224 05:40:18.601940 34361 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 24 05:40:18.602333 master-0 kubenswrapper[34361]: E0224 05:40:18.602021 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert podName:b0a03ff3-e39b-4be9-bb1f-827d00437e62 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:19.101993932 +0000 UTC m=+178.804611018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-bctpb" (UID: "b0a03ff3-e39b-4be9-bb1f-827d00437e62") : secret "networking-console-plugin-cert" not found
Feb 24 05:40:18.602559 master-0 kubenswrapper[34361]: I0224 05:40:18.601942 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a01a30b1-dc80-4442-b760-3f92d4906df6-metrics-client-ca\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.607325 master-0 kubenswrapper[34361]: I0224 05:40:18.602986 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/54b8a889-884a-4511-8569-42da64109ef8-metrics-server-audit-profiles\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.612192 master-0 kubenswrapper[34361]: I0224 05:40:18.612127 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbgfm\" (UniqueName: \"kubernetes.io/projected/a01a30b1-dc80-4442-b760-3f92d4906df6-kube-api-access-jbgfm\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.613447 master-0 kubenswrapper[34361]: I0224 05:40:18.613111 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-tls\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.613447 master-0 kubenswrapper[34361]: I0224 05:40:18.613358 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-grpc-tls\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.613975 master-0 kubenswrapper[34361]: I0224 05:40:18.613838 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.613975 master-0 kubenswrapper[34361]: I0224 05:40:18.613936 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.622334 master-0 kubenswrapper[34361]: I0224 05:40:18.617811 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/a01a30b1-dc80-4442-b760-3f92d4906df6-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-d588d74dc-gmlm4\" (UID: \"a01a30b1-dc80-4442-b760-3f92d4906df6\") " pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:18.632536 master-0 kubenswrapper[34361]: I0224 05:40:18.632489 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmfjr\" (UniqueName: \"kubernetes.io/projected/54b8a889-884a-4511-8569-42da64109ef8-kube-api-access-zmfjr\") pod \"metrics-server-7bf9b765b9-b9fxz\" (UID: \"54b8a889-884a-4511-8569-42da64109ef8\") " pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.639175 master-0 kubenswrapper[34361]: I0224 05:40:18.639112 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:18.679877 master-0 kubenswrapper[34361]: I0224 05:40:18.679251 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4"
Feb 24 05:40:19.064560 master-0 kubenswrapper[34361]: I0224 05:40:19.064509 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"]
Feb 24 05:40:19.069524 master-0 kubenswrapper[34361]: W0224 05:40:19.069473 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54b8a889_884a_4511_8569_42da64109ef8.slice/crio-cb7f5f389cd7be5d671c548067298562972cdedee1ffbf28b40fc28c7572718a WatchSource:0}: Error finding container cb7f5f389cd7be5d671c548067298562972cdedee1ffbf28b40fc28c7572718a: Status 404 returned error can't find the container with id cb7f5f389cd7be5d671c548067298562972cdedee1ffbf28b40fc28c7572718a
Feb 24 05:40:19.083693 master-0 kubenswrapper[34361]: I0224 05:40:19.083599 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz" event={"ID":"54b8a889-884a-4511-8569-42da64109ef8","Type":"ContainerStarted","Data":"cb7f5f389cd7be5d671c548067298562972cdedee1ffbf28b40fc28c7572718a"}
Feb 24 05:40:19.196561 master-0 kubenswrapper[34361]: I0224 05:40:19.196478 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:19.197454 master-0 kubenswrapper[34361]: E0224 05:40:19.197258 34361 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 24 05:40:19.197454 master-0 kubenswrapper[34361]: E0224 05:40:19.197349 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert podName:b0a03ff3-e39b-4be9-bb1f-827d00437e62 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:20.197306164 +0000 UTC m=+179.899923220 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-bctpb" (UID: "b0a03ff3-e39b-4be9-bb1f-827d00437e62") : secret "networking-console-plugin-cert" not found
Feb 24 05:40:19.266531 master-0 kubenswrapper[34361]: I0224 05:40:19.265490 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-d588d74dc-gmlm4"]
Feb 24 05:40:19.275949 master-0 kubenswrapper[34361]: W0224 05:40:19.275822 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda01a30b1_dc80_4442_b760_3f92d4906df6.slice/crio-5e2f759af6577ad2d8ae2bdce3aa5dc3f334783b12d9a54973d041e63796d0e2 WatchSource:0}: Error finding container 5e2f759af6577ad2d8ae2bdce3aa5dc3f334783b12d9a54973d041e63796d0e2: Status 404 returned error can't find the container with id 5e2f759af6577ad2d8ae2bdce3aa5dc3f334783b12d9a54973d041e63796d0e2
Feb 24 05:40:20.094093 master-0 kubenswrapper[34361]: I0224 05:40:20.093990 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz" event={"ID":"54b8a889-884a-4511-8569-42da64109ef8","Type":"ContainerStarted","Data":"8f9d9c5152d6d1acb2eb0bd857b9b1bf60e8600927cfca69b7effa7b39585b18"}
Feb 24 05:40:20.095546 master-0 kubenswrapper[34361]: I0224 05:40:20.095483 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" event={"ID":"a01a30b1-dc80-4442-b760-3f92d4906df6","Type":"ContainerStarted","Data":"5e2f759af6577ad2d8ae2bdce3aa5dc3f334783b12d9a54973d041e63796d0e2"}
Feb 24 05:40:20.124971 master-0 kubenswrapper[34361]: I0224 05:40:20.124842 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz" podStartSLOduration=2.124809924 podStartE2EDuration="2.124809924s" podCreationTimestamp="2026-02-24 05:40:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:40:20.116491649 +0000 UTC m=+179.819108695" watchObservedRunningTime="2026-02-24 05:40:20.124809924 +0000 UTC m=+179.827426980"
Feb 24 05:40:20.217832 master-0 kubenswrapper[34361]: I0224 05:40:20.217731 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:20.218398 master-0 kubenswrapper[34361]: E0224 05:40:20.218274 34361 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 24 05:40:20.218545 master-0 kubenswrapper[34361]: E0224 05:40:20.218510 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert podName:b0a03ff3-e39b-4be9-bb1f-827d00437e62 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:22.218455089 +0000 UTC m=+181.921072325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-bctpb" (UID: "b0a03ff3-e39b-4be9-bb1f-827d00437e62") : secret "networking-console-plugin-cert" not found
Feb 24 05:40:22.124774 master-0 kubenswrapper[34361]: I0224 05:40:22.124681 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" event={"ID":"a01a30b1-dc80-4442-b760-3f92d4906df6","Type":"ContainerStarted","Data":"cb72672080dca42c3b5cb6a2917b9ab9cfe78224f4326d0e4e2f3476d49f126d"}
Feb 24 05:40:22.124774 master-0 kubenswrapper[34361]: I0224 05:40:22.124768 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" event={"ID":"a01a30b1-dc80-4442-b760-3f92d4906df6","Type":"ContainerStarted","Data":"1fda9e3da225d14ad964cbb027d0e2eb471eaac05ea77e3beebc8b632d492a42"}
Feb 24 05:40:22.261871 master-0 kubenswrapper[34361]: I0224 05:40:22.261759 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:22.262152 master-0 kubenswrapper[34361]: E0224 05:40:22.261993 34361 secret.go:189] Couldn't get secret
openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 24 05:40:22.262152 master-0 kubenswrapper[34361]: E0224 05:40:22.262120 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert podName:b0a03ff3-e39b-4be9-bb1f-827d00437e62 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:26.262088374 +0000 UTC m=+185.964705420 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-bctpb" (UID: "b0a03ff3-e39b-4be9-bb1f-827d00437e62") : secret "networking-console-plugin-cert" not found Feb 24 05:40:23.140690 master-0 kubenswrapper[34361]: I0224 05:40:23.140611 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" event={"ID":"a01a30b1-dc80-4442-b760-3f92d4906df6","Type":"ContainerStarted","Data":"a8c1976c051c98205b229d8b44d1cd15d1143fb4d0327d9b58aef222a75ed472"} Feb 24 05:40:24.175067 master-0 kubenswrapper[34361]: I0224 05:40:24.174827 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" event={"ID":"a01a30b1-dc80-4442-b760-3f92d4906df6","Type":"ContainerStarted","Data":"bf12b2add5f9fe8551e1df74a35721d04f0d044b3c8cd62a0567a60132fc557b"} Feb 24 05:40:24.175067 master-0 kubenswrapper[34361]: I0224 05:40:24.174924 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" event={"ID":"a01a30b1-dc80-4442-b760-3f92d4906df6","Type":"ContainerStarted","Data":"4d48eedd9d8a9e968a046e58a8dd1afd8474f3f5a89015d5d3d676f63b5ba079"} Feb 24 05:40:24.175067 master-0 kubenswrapper[34361]: I0224 05:40:24.174947 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" event={"ID":"a01a30b1-dc80-4442-b760-3f92d4906df6","Type":"ContainerStarted","Data":"2c31c5b33ce2a82bec40adc965fafd3fead3464c0826aed4f14e34c50d9d5bb2"} Feb 24 05:40:24.177449 master-0 kubenswrapper[34361]: I0224 05:40:24.177365 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" Feb 24 05:40:24.235064 master-0 kubenswrapper[34361]: I0224 05:40:24.234916 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" podStartSLOduration=2.251448607 podStartE2EDuration="6.234875189s" podCreationTimestamp="2026-02-24 05:40:18 +0000 UTC" firstStartedPulling="2026-02-24 05:40:19.281516044 +0000 UTC m=+178.984133090" lastFinishedPulling="2026-02-24 05:40:23.264942596 +0000 UTC m=+182.967559672" observedRunningTime="2026-02-24 05:40:24.219085694 +0000 UTC m=+183.921702800" watchObservedRunningTime="2026-02-24 05:40:24.234875189 +0000 UTC m=+183.937492275" Feb 24 05:40:26.347890 master-0 kubenswrapper[34361]: I0224 05:40:26.347810 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb" Feb 24 05:40:26.349884 master-0 kubenswrapper[34361]: E0224 05:40:26.349807 34361 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 24 05:40:26.350015 master-0 kubenswrapper[34361]: E0224 05:40:26.349934 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert 
podName:b0a03ff3-e39b-4be9-bb1f-827d00437e62 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:34.349903442 +0000 UTC m=+194.052520528 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-bctpb" (UID: "b0a03ff3-e39b-4be9-bb1f-827d00437e62") : secret "networking-console-plugin-cert" not found Feb 24 05:40:28.699384 master-0 kubenswrapper[34361]: I0224 05:40:28.698913 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-d588d74dc-gmlm4" Feb 24 05:40:30.932021 master-0 kubenswrapper[34361]: I0224 05:40:30.931923 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-flsqf"] Feb 24 05:40:30.933442 master-0 kubenswrapper[34361]: I0224 05:40:30.933380 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:30.936200 master-0 kubenswrapper[34361]: I0224 05:40:30.936161 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-zjdvw" Feb 24 05:40:30.936513 master-0 kubenswrapper[34361]: I0224 05:40:30.936182 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 05:40:31.044061 master-0 kubenswrapper[34361]: I0224 05:40:31.043937 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-host\") pod \"node-ca-flsqf\" (UID: \"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.044481 master-0 kubenswrapper[34361]: I0224 05:40:31.044160 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7sp\" (UniqueName: \"kubernetes.io/projected/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-kube-api-access-5x7sp\") pod \"node-ca-flsqf\" (UID: \"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.044481 master-0 kubenswrapper[34361]: I0224 05:40:31.044216 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-serviceca\") pod \"node-ca-flsqf\" (UID: \"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.146087 master-0 kubenswrapper[34361]: I0224 05:40:31.145959 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x7sp\" (UniqueName: \"kubernetes.io/projected/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-kube-api-access-5x7sp\") pod \"node-ca-flsqf\" (UID: 
\"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.146087 master-0 kubenswrapper[34361]: I0224 05:40:31.146090 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-serviceca\") pod \"node-ca-flsqf\" (UID: \"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.146461 master-0 kubenswrapper[34361]: I0224 05:40:31.146225 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-host\") pod \"node-ca-flsqf\" (UID: \"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.146510 master-0 kubenswrapper[34361]: I0224 05:40:31.146478 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-host\") pod \"node-ca-flsqf\" (UID: \"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.146958 master-0 kubenswrapper[34361]: I0224 05:40:31.146902 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-serviceca\") pod \"node-ca-flsqf\" (UID: \"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.168749 master-0 kubenswrapper[34361]: I0224 05:40:31.168682 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x7sp\" (UniqueName: \"kubernetes.io/projected/3eeeb866-cd6b-46f0-b338-8d8fa09824b6-kube-api-access-5x7sp\") pod \"node-ca-flsqf\" (UID: \"3eeeb866-cd6b-46f0-b338-8d8fa09824b6\") " pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.266837 master-0 
kubenswrapper[34361]: I0224 05:40:31.266727 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-flsqf" Feb 24 05:40:31.310001 master-0 kubenswrapper[34361]: W0224 05:40:31.309868 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eeeb866_cd6b_46f0_b338_8d8fa09824b6.slice/crio-ede0842c15799b0b455f3b8fbc3903a0108768ea772795ea385109b57e723c66 WatchSource:0}: Error finding container ede0842c15799b0b455f3b8fbc3903a0108768ea772795ea385109b57e723c66: Status 404 returned error can't find the container with id ede0842c15799b0b455f3b8fbc3903a0108768ea772795ea385109b57e723c66 Feb 24 05:40:32.268529 master-0 kubenswrapper[34361]: I0224 05:40:32.268422 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-flsqf" event={"ID":"3eeeb866-cd6b-46f0-b338-8d8fa09824b6","Type":"ContainerStarted","Data":"ede0842c15799b0b455f3b8fbc3903a0108768ea772795ea385109b57e723c66"} Feb 24 05:40:34.297334 master-0 kubenswrapper[34361]: I0224 05:40:34.297229 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-flsqf" event={"ID":"3eeeb866-cd6b-46f0-b338-8d8fa09824b6","Type":"ContainerStarted","Data":"51a4a8f0371561dd8d9313476143c774968ac5a0fe9021c3778e64a3caf7a3c9"} Feb 24 05:40:34.323788 master-0 kubenswrapper[34361]: I0224 05:40:34.323571 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-flsqf" podStartSLOduration=2.313022608 podStartE2EDuration="4.3235394s" podCreationTimestamp="2026-02-24 05:40:30 +0000 UTC" firstStartedPulling="2026-02-24 05:40:31.314424071 +0000 UTC m=+191.017041157" lastFinishedPulling="2026-02-24 05:40:33.324940893 +0000 UTC m=+193.027557949" observedRunningTime="2026-02-24 05:40:34.319505671 +0000 UTC m=+194.022122757" watchObservedRunningTime="2026-02-24 05:40:34.3235394 +0000 UTC 
m=+194.026156476" Feb 24 05:40:34.407549 master-0 kubenswrapper[34361]: I0224 05:40:34.407454 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb" Feb 24 05:40:34.407887 master-0 kubenswrapper[34361]: E0224 05:40:34.407705 34361 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 24 05:40:34.407887 master-0 kubenswrapper[34361]: E0224 05:40:34.407809 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert podName:b0a03ff3-e39b-4be9-bb1f-827d00437e62 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:50.407784282 +0000 UTC m=+210.110401328 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-bctpb" (UID: "b0a03ff3-e39b-4be9-bb1f-827d00437e62") : secret "networking-console-plugin-cert" not found Feb 24 05:40:38.640409 master-0 kubenswrapper[34361]: I0224 05:40:38.640306 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz" Feb 24 05:40:38.641519 master-0 kubenswrapper[34361]: I0224 05:40:38.641444 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz" Feb 24 05:40:43.494600 master-0 kubenswrapper[34361]: I0224 05:40:43.494499 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b6cfdbd-5qbf5" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" containerID="cri-o://3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4" gracePeriod=15 Feb 24 05:40:44.073182 master-0 kubenswrapper[34361]: I0224 05:40:44.073110 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b6cfdbd-5qbf5_f3038676-0c11-4616-bb1e-f5d396e420f4/console/0.log" Feb 24 05:40:44.073443 master-0 kubenswrapper[34361]: I0224 05:40:44.073232 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:40:44.115979 master-0 kubenswrapper[34361]: I0224 05:40:44.115915 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-console-config\") pod \"f3038676-0c11-4616-bb1e-f5d396e420f4\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " Feb 24 05:40:44.116225 master-0 kubenswrapper[34361]: I0224 05:40:44.116003 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-service-ca\") pod \"f3038676-0c11-4616-bb1e-f5d396e420f4\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " Feb 24 05:40:44.116225 master-0 kubenswrapper[34361]: I0224 05:40:44.116081 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-serving-cert\") pod \"f3038676-0c11-4616-bb1e-f5d396e420f4\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " Feb 24 05:40:44.116225 master-0 kubenswrapper[34361]: I0224 05:40:44.116106 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-oauth-serving-cert\") pod \"f3038676-0c11-4616-bb1e-f5d396e420f4\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " Feb 24 05:40:44.116225 master-0 kubenswrapper[34361]: I0224 05:40:44.116163 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-oauth-config\") pod \"f3038676-0c11-4616-bb1e-f5d396e420f4\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " Feb 24 05:40:44.116225 master-0 kubenswrapper[34361]: I0224 
05:40:44.116183 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v988g\" (UniqueName: \"kubernetes.io/projected/f3038676-0c11-4616-bb1e-f5d396e420f4-kube-api-access-v988g\") pod \"f3038676-0c11-4616-bb1e-f5d396e420f4\" (UID: \"f3038676-0c11-4616-bb1e-f5d396e420f4\") " Feb 24 05:40:44.117813 master-0 kubenswrapper[34361]: I0224 05:40:44.117673 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-console-config" (OuterVolumeSpecName: "console-config") pod "f3038676-0c11-4616-bb1e-f5d396e420f4" (UID: "f3038676-0c11-4616-bb1e-f5d396e420f4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:40:44.117813 master-0 kubenswrapper[34361]: I0224 05:40:44.117758 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f3038676-0c11-4616-bb1e-f5d396e420f4" (UID: "f3038676-0c11-4616-bb1e-f5d396e420f4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:40:44.118075 master-0 kubenswrapper[34361]: I0224 05:40:44.117997 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-service-ca" (OuterVolumeSpecName: "service-ca") pod "f3038676-0c11-4616-bb1e-f5d396e420f4" (UID: "f3038676-0c11-4616-bb1e-f5d396e420f4"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:40:44.129137 master-0 kubenswrapper[34361]: I0224 05:40:44.128978 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3038676-0c11-4616-bb1e-f5d396e420f4-kube-api-access-v988g" (OuterVolumeSpecName: "kube-api-access-v988g") pod "f3038676-0c11-4616-bb1e-f5d396e420f4" (UID: "f3038676-0c11-4616-bb1e-f5d396e420f4"). InnerVolumeSpecName "kube-api-access-v988g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:40:44.129137 master-0 kubenswrapper[34361]: I0224 05:40:44.128996 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f3038676-0c11-4616-bb1e-f5d396e420f4" (UID: "f3038676-0c11-4616-bb1e-f5d396e420f4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:40:44.129137 master-0 kubenswrapper[34361]: I0224 05:40:44.129062 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f3038676-0c11-4616-bb1e-f5d396e420f4" (UID: "f3038676-0c11-4616-bb1e-f5d396e420f4"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:40:44.218657 master-0 kubenswrapper[34361]: I0224 05:40:44.218553 34361 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:40:44.218657 master-0 kubenswrapper[34361]: I0224 05:40:44.218619 34361 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:40:44.218657 master-0 kubenswrapper[34361]: I0224 05:40:44.218636 34361 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f3038676-0c11-4616-bb1e-f5d396e420f4-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:40:44.218657 master-0 kubenswrapper[34361]: I0224 05:40:44.218650 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v988g\" (UniqueName: \"kubernetes.io/projected/f3038676-0c11-4616-bb1e-f5d396e420f4-kube-api-access-v988g\") on node \"master-0\" DevicePath \"\"" Feb 24 05:40:44.218657 master-0 kubenswrapper[34361]: I0224 05:40:44.218665 34361 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-console-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:40:44.218657 master-0 kubenswrapper[34361]: I0224 05:40:44.218677 34361 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f3038676-0c11-4616-bb1e-f5d396e420f4-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:40:44.415346 master-0 kubenswrapper[34361]: I0224 05:40:44.415139 34361 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-5b6cfdbd-5qbf5_f3038676-0c11-4616-bb1e-f5d396e420f4/console/0.log" Feb 24 05:40:44.415866 master-0 kubenswrapper[34361]: I0224 05:40:44.415822 34361 generic.go:334] "Generic (PLEG): container finished" podID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerID="3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4" exitCode=2 Feb 24 05:40:44.416046 master-0 kubenswrapper[34361]: I0224 05:40:44.415952 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b6cfdbd-5qbf5" event={"ID":"f3038676-0c11-4616-bb1e-f5d396e420f4","Type":"ContainerDied","Data":"3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4"} Feb 24 05:40:44.416170 master-0 kubenswrapper[34361]: I0224 05:40:44.416087 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b6cfdbd-5qbf5" event={"ID":"f3038676-0c11-4616-bb1e-f5d396e420f4","Type":"ContainerDied","Data":"40a246baf7d57e0d6e74e45814f0b62e162e91d045cd732a766d0cc321da8d9a"} Feb 24 05:40:44.416170 master-0 kubenswrapper[34361]: I0224 05:40:44.415939 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b6cfdbd-5qbf5" Feb 24 05:40:44.416337 master-0 kubenswrapper[34361]: I0224 05:40:44.416127 34361 scope.go:117] "RemoveContainer" containerID="3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4" Feb 24 05:40:44.450268 master-0 kubenswrapper[34361]: I0224 05:40:44.450203 34361 scope.go:117] "RemoveContainer" containerID="3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4" Feb 24 05:40:44.451125 master-0 kubenswrapper[34361]: E0224 05:40:44.451052 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4\": container with ID starting with 3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4 not found: ID does not exist" containerID="3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4" Feb 24 05:40:44.451220 master-0 kubenswrapper[34361]: I0224 05:40:44.451144 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4"} err="failed to get container status \"3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4\": rpc error: code = NotFound desc = could not find container \"3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4\": container with ID starting with 3493e9566cfc88b777212b8209a79658c53ac43dd63f8aceabedeb37e714b7a4 not found: ID does not exist" Feb 24 05:40:44.515141 master-0 kubenswrapper[34361]: I0224 05:40:44.515034 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b6cfdbd-5qbf5"] Feb 24 05:40:44.530701 master-0 kubenswrapper[34361]: I0224 05:40:44.530595 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b6cfdbd-5qbf5"] Feb 24 05:40:44.613589 master-0 kubenswrapper[34361]: I0224 05:40:44.613486 34361 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" path="/var/lib/kubelet/pods/f3038676-0c11-4616-bb1e-f5d396e420f4/volumes" Feb 24 05:40:48.426891 master-0 kubenswrapper[34361]: I0224 05:40:48.426791 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 24 05:40:48.427664 master-0 kubenswrapper[34361]: E0224 05:40:48.427263 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" Feb 24 05:40:48.427664 master-0 kubenswrapper[34361]: I0224 05:40:48.427286 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" Feb 24 05:40:48.427664 master-0 kubenswrapper[34361]: I0224 05:40:48.427500 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3038676-0c11-4616-bb1e-f5d396e420f4" containerName="console" Feb 24 05:40:48.430062 master-0 kubenswrapper[34361]: I0224 05:40:48.430018 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 24 05:40:48.433704 master-0 kubenswrapper[34361]: I0224 05:40:48.433640 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 24 05:40:48.434482 master-0 kubenswrapper[34361]: I0224 05:40:48.434405 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 24 05:40:48.435146 master-0 kubenswrapper[34361]: I0224 05:40:48.435111 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 24 05:40:48.435838 master-0 kubenswrapper[34361]: I0224 05:40:48.435800 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 24 05:40:48.436864 master-0 kubenswrapper[34361]: I0224 05:40:48.436824 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 24 05:40:48.437506 master-0 kubenswrapper[34361]: I0224 05:40:48.437466 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 24 05:40:48.438201 master-0 kubenswrapper[34361]: I0224 05:40:48.438164 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 24 05:40:48.446741 master-0 kubenswrapper[34361]: I0224 05:40:48.446631 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 24 05:40:48.453215 master-0 kubenswrapper[34361]: I0224 05:40:48.453092 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 24 05:40:48.497049 master-0 kubenswrapper[34361]: I0224 05:40:48.496954 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497049 master-0 kubenswrapper[34361]: I0224 05:40:48.497053 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c4b6cbff-c5df-420d-a923-0d27b8e1e896-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497404 master-0 kubenswrapper[34361]: I0224 05:40:48.497114 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-web-config\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497404 master-0 kubenswrapper[34361]: I0224 05:40:48.497174 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c4b6cbff-c5df-420d-a923-0d27b8e1e896-config-out\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497404 master-0 kubenswrapper[34361]: I0224 05:40:48.497224 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c4b6cbff-c5df-420d-a923-0d27b8e1e896-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497404 master-0 kubenswrapper[34361]: I0224 05:40:48.497284 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497404 master-0 kubenswrapper[34361]: I0224 05:40:48.497390 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c4b6cbff-c5df-420d-a923-0d27b8e1e896-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497662 master-0 kubenswrapper[34361]: I0224 05:40:48.497429 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497662 master-0 kubenswrapper[34361]: I0224 05:40:48.497472 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-config-volume\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497662 master-0 kubenswrapper[34361]: I0224 05:40:48.497596 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497803 master-0 kubenswrapper[34361]: I0224 05:40:48.497745 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj2pt\" (UniqueName: \"kubernetes.io/projected/c4b6cbff-c5df-420d-a923-0d27b8e1e896-kube-api-access-fj2pt\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.497856 master-0 kubenswrapper[34361]: I0224 05:40:48.497835 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b6cbff-c5df-420d-a923-0d27b8e1e896-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.600267 master-0 kubenswrapper[34361]: I0224 05:40:48.600139 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b6cbff-c5df-420d-a923-0d27b8e1e896-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.600267 master-0 kubenswrapper[34361]: I0224 05:40:48.600253 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.600725 master-0 kubenswrapper[34361]: I0224 05:40:48.600524 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c4b6cbff-c5df-420d-a923-0d27b8e1e896-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.600725 master-0 kubenswrapper[34361]: I0224 05:40:48.600724 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-web-config\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.600923 master-0 kubenswrapper[34361]: I0224 05:40:48.600877 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c4b6cbff-c5df-420d-a923-0d27b8e1e896-config-out\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.601862 master-0 kubenswrapper[34361]: I0224 05:40:48.601816 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c4b6cbff-c5df-420d-a923-0d27b8e1e896-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.601998 master-0 kubenswrapper[34361]: I0224 05:40:48.601922 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.602104 master-0 kubenswrapper[34361]: I0224 05:40:48.602001 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c4b6cbff-c5df-420d-a923-0d27b8e1e896-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.602104 master-0 kubenswrapper[34361]: I0224 05:40:48.602025 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.602262 master-0 kubenswrapper[34361]: I0224 05:40:48.602104 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-config-volume\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.602262 master-0 kubenswrapper[34361]: I0224 05:40:48.602168 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.603095 master-0 kubenswrapper[34361]: I0224 05:40:48.602484 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c4b6cbff-c5df-420d-a923-0d27b8e1e896-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.603095 master-0 kubenswrapper[34361]: E0224 05:40:48.602667 34361 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 24 05:40:48.603095 master-0 kubenswrapper[34361]: E0224 05:40:48.602799 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls podName:c4b6cbff-c5df-420d-a923-0d27b8e1e896 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:49.102760984 +0000 UTC m=+208.805378070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c4b6cbff-c5df-420d-a923-0d27b8e1e896") : secret "alertmanager-main-tls" not found
Feb 24 05:40:48.603439 master-0 kubenswrapper[34361]: I0224 05:40:48.603384 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4b6cbff-c5df-420d-a923-0d27b8e1e896-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.603639 master-0 kubenswrapper[34361]: I0224 05:40:48.602225 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj2pt\" (UniqueName: \"kubernetes.io/projected/c4b6cbff-c5df-420d-a923-0d27b8e1e896-kube-api-access-fj2pt\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.604210 master-0 kubenswrapper[34361]: I0224 05:40:48.604088 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c4b6cbff-c5df-420d-a923-0d27b8e1e896-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.606279 master-0 kubenswrapper[34361]: I0224 05:40:48.606236 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.606506 master-0 kubenswrapper[34361]: I0224 05:40:48.606458 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c4b6cbff-c5df-420d-a923-0d27b8e1e896-config-out\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.607974 master-0 kubenswrapper[34361]: I0224 05:40:48.607344 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-web-config\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.607974 master-0 kubenswrapper[34361]: I0224 05:40:48.607352 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.608203 master-0 kubenswrapper[34361]: I0224 05:40:48.608057 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c4b6cbff-c5df-420d-a923-0d27b8e1e896-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.610742 master-0 kubenswrapper[34361]: I0224 05:40:48.608338 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.610742 master-0 kubenswrapper[34361]: I0224 05:40:48.610026 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-config-volume\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:48.636641 master-0 kubenswrapper[34361]: I0224 05:40:48.636130 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj2pt\" (UniqueName: \"kubernetes.io/projected/c4b6cbff-c5df-420d-a923-0d27b8e1e896-kube-api-access-fj2pt\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:49.113090 master-0 kubenswrapper[34361]: I0224 05:40:49.113005 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:49.113555 master-0 kubenswrapper[34361]: E0224 05:40:49.113293 34361 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 24 05:40:49.113555 master-0 kubenswrapper[34361]: E0224 05:40:49.113457 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls podName:c4b6cbff-c5df-420d-a923-0d27b8e1e896 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:50.113427584 +0000 UTC m=+209.816044640 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c4b6cbff-c5df-420d-a923-0d27b8e1e896") : secret "alertmanager-main-tls" not found
Feb 24 05:40:50.131784 master-0 kubenswrapper[34361]: I0224 05:40:50.131626 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:50.133029 master-0 kubenswrapper[34361]: E0224 05:40:50.131851 34361 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 24 05:40:50.133029 master-0 kubenswrapper[34361]: E0224 05:40:50.131973 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls podName:c4b6cbff-c5df-420d-a923-0d27b8e1e896 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:52.131939698 +0000 UTC m=+211.834556784 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c4b6cbff-c5df-420d-a923-0d27b8e1e896") : secret "alertmanager-main-tls" not found
Feb 24 05:40:50.437481 master-0 kubenswrapper[34361]: I0224 05:40:50.437146 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:40:50.438090 master-0 kubenswrapper[34361]: E0224 05:40:50.437539 34361 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Feb 24 05:40:50.438090 master-0 kubenswrapper[34361]: E0224 05:40:50.437721 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert podName:b0a03ff3-e39b-4be9-bb1f-827d00437e62 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:22.437674452 +0000 UTC m=+242.140291538 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-bctpb" (UID: "b0a03ff3-e39b-4be9-bb1f-827d00437e62") : secret "networking-console-plugin-cert" not found
Feb 24 05:40:52.172835 master-0 kubenswrapper[34361]: I0224 05:40:52.172697 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:52.173948 master-0 kubenswrapper[34361]: E0224 05:40:52.172955 34361 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 24 05:40:52.173948 master-0 kubenswrapper[34361]: E0224 05:40:52.173079 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls podName:c4b6cbff-c5df-420d-a923-0d27b8e1e896 nodeName:}" failed. No retries permitted until 2026-02-24 05:40:56.173056175 +0000 UTC m=+215.875673221 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c4b6cbff-c5df-420d-a923-0d27b8e1e896") : secret "alertmanager-main-tls" not found
Feb 24 05:40:56.258703 master-0 kubenswrapper[34361]: I0224 05:40:56.258548 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:40:56.260171 master-0 kubenswrapper[34361]: E0224 05:40:56.259076 34361 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found
Feb 24 05:40:56.260171 master-0 kubenswrapper[34361]: E0224 05:40:56.259205 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls podName:c4b6cbff-c5df-420d-a923-0d27b8e1e896 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:04.259163996 +0000 UTC m=+223.961781072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c4b6cbff-c5df-420d-a923-0d27b8e1e896") : secret "alertmanager-main-tls" not found
Feb 24 05:40:58.649690 master-0 kubenswrapper[34361]: I0224 05:40:58.649499 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:40:58.658517 master-0 kubenswrapper[34361]: I0224 05:40:58.657198 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7bf9b765b9-b9fxz"
Feb 24 05:41:03.104437 master-0 kubenswrapper[34361]: I0224 05:41:03.104302 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 24 05:41:03.107732 master-0 kubenswrapper[34361]: I0224 05:41:03.107678 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.110879 master-0 kubenswrapper[34361]: I0224 05:41:03.110833 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 24 05:41:03.111216 master-0 kubenswrapper[34361]: I0224 05:41:03.111187 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 24 05:41:03.111430 master-0 kubenswrapper[34361]: I0224 05:41:03.111405 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 24 05:41:03.111769 master-0 kubenswrapper[34361]: I0224 05:41:03.111739 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 24 05:41:03.111957 master-0 kubenswrapper[34361]: I0224 05:41:03.111929 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-fkuahuqkfbhtv"
Feb 24 05:41:03.112367 master-0 kubenswrapper[34361]: I0224 05:41:03.112328 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 24 05:41:03.112443 master-0 kubenswrapper[34361]: I0224 05:41:03.112418 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 24 05:41:03.112539 master-0 kubenswrapper[34361]: I0224 05:41:03.112481 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 24 05:41:03.112599 master-0 kubenswrapper[34361]: I0224 05:41:03.112583 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 24 05:41:03.112653 master-0 kubenswrapper[34361]: I0224 05:41:03.112491 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 24 05:41:03.117823 master-0 kubenswrapper[34361]: I0224 05:41:03.117759 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 24 05:41:03.121395 master-0 kubenswrapper[34361]: I0224 05:41:03.121340 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 24 05:41:03.142176 master-0 kubenswrapper[34361]: I0224 05:41:03.142109 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 24 05:41:03.204014 master-0 kubenswrapper[34361]: I0224 05:41:03.203918 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204014 master-0 kubenswrapper[34361]: I0224 05:41:03.204000 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-config\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204450 master-0 kubenswrapper[34361]: I0224 05:41:03.204071 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204450 master-0 kubenswrapper[34361]: I0224 05:41:03.204106 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204450 master-0 kubenswrapper[34361]: I0224 05:41:03.204176 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204450 master-0 kubenswrapper[34361]: I0224 05:41:03.204210 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204450 master-0 kubenswrapper[34361]: I0224 05:41:03.204375 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-web-config\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204603 master-0 kubenswrapper[34361]: I0224 05:41:03.204512 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204603 master-0 kubenswrapper[34361]: I0224 05:41:03.204582 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45536e5e-cd30-4946-a98f-1454c7a2f5e1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204664 master-0 kubenswrapper[34361]: I0224 05:41:03.204641 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204747 master-0 kubenswrapper[34361]: I0224 05:41:03.204715 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ljbz\" (UniqueName: \"kubernetes.io/projected/45536e5e-cd30-4946-a98f-1454c7a2f5e1-kube-api-access-7ljbz\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204798 master-0 kubenswrapper[34361]: I0224 05:41:03.204767 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.204972 master-0 kubenswrapper[34361]: I0224 05:41:03.204933 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.205064 master-0 kubenswrapper[34361]: I0224 05:41:03.205026 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.205165 master-0 kubenswrapper[34361]: I0224 05:41:03.205128 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45536e5e-cd30-4946-a98f-1454c7a2f5e1-config-out\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.205208 master-0 kubenswrapper[34361]: I0224 05:41:03.205184 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.205243 master-0 kubenswrapper[34361]: I0224 05:41:03.205212 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.205289 master-0 kubenswrapper[34361]: I0224 05:41:03.205263 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.309950 master-0 kubenswrapper[34361]: I0224 05:41:03.309773 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.310445 master-0 kubenswrapper[34361]: I0224 05:41:03.309973 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.310445 master-0 kubenswrapper[34361]: I0224 05:41:03.310083 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.310719 master-0 kubenswrapper[34361]: I0224 05:41:03.310436 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.310719 master-0 kubenswrapper[34361]: I0224 05:41:03.310585 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-config\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.310940 master-0 kubenswrapper[34361]: I0224 05:41:03.310719 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.310940 master-0 kubenswrapper[34361]: I0224 05:41:03.310789 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.313243 master-0 kubenswrapper[34361]: I0224 05:41:03.313021 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.313484 master-0 kubenswrapper[34361]: I0224 05:41:03.313252 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.313484 master-0 kubenswrapper[34361]: I0224 05:41:03.313402 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:41:03.313675 master-0 kubenswrapper[34361]: E0224 05:41:03.313528 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found
Feb 24 05:41:03.313773 master-0 kubenswrapper[34361]: E0224 05:41:03.313678 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:03.813632177 +0000 UTC m=+223.516249393 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:03.313773 master-0 kubenswrapper[34361]: I0224 05:41:03.313488 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-web-config\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.314071 master-0 kubenswrapper[34361]: I0224 05:41:03.313870 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.314071 master-0 kubenswrapper[34361]: I0224 05:41:03.313974 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45536e5e-cd30-4946-a98f-1454c7a2f5e1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.314071 master-0 kubenswrapper[34361]: E0224 05:41:03.314039 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 24 05:41:03.314547 master-0 kubenswrapper[34361]: I0224 05:41:03.314067 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: 
\"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.314547 master-0 kubenswrapper[34361]: I0224 05:41:03.314137 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ljbz\" (UniqueName: \"kubernetes.io/projected/45536e5e-cd30-4946-a98f-1454c7a2f5e1-kube-api-access-7ljbz\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.314547 master-0 kubenswrapper[34361]: E0224 05:41:03.314383 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:03.81413062 +0000 UTC m=+223.516747706 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-tls" not found Feb 24 05:41:03.314840 master-0 kubenswrapper[34361]: I0224 05:41:03.314638 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.315021 master-0 kubenswrapper[34361]: I0224 05:41:03.314943 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.315167 master-0 kubenswrapper[34361]: I0224 05:41:03.315065 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.315289 master-0 kubenswrapper[34361]: I0224 05:41:03.315163 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45536e5e-cd30-4946-a98f-1454c7a2f5e1-config-out\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.317713 master-0 kubenswrapper[34361]: I0224 05:41:03.315421 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.317713 master-0 kubenswrapper[34361]: I0224 05:41:03.315913 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.317713 master-0 kubenswrapper[34361]: I0224 05:41:03.316437 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: 
\"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.317713 master-0 kubenswrapper[34361]: I0224 05:41:03.317358 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.317713 master-0 kubenswrapper[34361]: I0224 05:41:03.317617 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.319909 master-0 kubenswrapper[34361]: I0224 05:41:03.319844 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.320253 master-0 kubenswrapper[34361]: I0224 05:41:03.320178 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.320712 master-0 kubenswrapper[34361]: I0224 05:41:03.320644 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.321424 master-0 kubenswrapper[34361]: I0224 05:41:03.321372 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.323536 master-0 kubenswrapper[34361]: I0224 05:41:03.323462 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-web-config\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.326277 master-0 kubenswrapper[34361]: I0224 05:41:03.326205 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/45536e5e-cd30-4946-a98f-1454c7a2f5e1-config-out\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.326929 master-0 kubenswrapper[34361]: I0224 05:41:03.326856 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-config\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.327059 master-0 kubenswrapper[34361]: I0224 05:41:03.326942 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/45536e5e-cd30-4946-a98f-1454c7a2f5e1-tls-assets\") pod \"prometheus-k8s-0\" (UID: 
\"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.330171 master-0 kubenswrapper[34361]: I0224 05:41:03.330108 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/45536e5e-cd30-4946-a98f-1454c7a2f5e1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.348073 master-0 kubenswrapper[34361]: I0224 05:41:03.348012 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ljbz\" (UniqueName: \"kubernetes.io/projected/45536e5e-cd30-4946-a98f-1454c7a2f5e1-kube-api-access-7ljbz\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.833460 master-0 kubenswrapper[34361]: I0224 05:41:03.833223 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.833990 master-0 kubenswrapper[34361]: I0224 05:41:03.833515 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:03.833990 master-0 kubenswrapper[34361]: E0224 05:41:03.833651 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:03.833990 master-0 
kubenswrapper[34361]: E0224 05:41:03.833843 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:04.833797284 +0000 UTC m=+224.536414370 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:03.834397 master-0 kubenswrapper[34361]: E0224 05:41:03.834011 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 24 05:41:03.834397 master-0 kubenswrapper[34361]: E0224 05:41:03.834169 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:04.834145073 +0000 UTC m=+224.536762159 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-tls" not found Feb 24 05:41:04.342511 master-0 kubenswrapper[34361]: I0224 05:41:04.342388 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 05:41:04.343757 master-0 kubenswrapper[34361]: E0224 05:41:04.342746 34361 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 24 05:41:04.343757 master-0 kubenswrapper[34361]: E0224 05:41:04.342892 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls podName:c4b6cbff-c5df-420d-a923-0d27b8e1e896 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:20.34285266 +0000 UTC m=+240.045469746 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c4b6cbff-c5df-420d-a923-0d27b8e1e896") : secret "alertmanager-main-tls" not found Feb 24 05:41:04.855531 master-0 kubenswrapper[34361]: I0224 05:41:04.855396 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:04.855531 master-0 kubenswrapper[34361]: I0224 05:41:04.855541 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:04.856153 master-0 kubenswrapper[34361]: E0224 05:41:04.855657 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:04.856153 master-0 kubenswrapper[34361]: E0224 05:41:04.855803 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:06.855774701 +0000 UTC m=+226.558391767 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:04.856153 master-0 kubenswrapper[34361]: E0224 05:41:04.855895 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 24 05:41:04.856153 master-0 kubenswrapper[34361]: E0224 05:41:04.856063 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:06.856018148 +0000 UTC m=+226.558635354 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-tls" not found Feb 24 05:41:06.906811 master-0 kubenswrapper[34361]: I0224 05:41:06.906692 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:06.907793 master-0 kubenswrapper[34361]: I0224 05:41:06.906987 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: 
\"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:06.907793 master-0 kubenswrapper[34361]: E0224 05:41:06.907238 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:06.907793 master-0 kubenswrapper[34361]: E0224 05:41:06.907387 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:10.907351752 +0000 UTC m=+230.609968848 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:06.907793 master-0 kubenswrapper[34361]: E0224 05:41:06.907686 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 24 05:41:06.908175 master-0 kubenswrapper[34361]: E0224 05:41:06.907863 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:10.907819604 +0000 UTC m=+230.610436680 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-tls" not found Feb 24 05:41:10.999614 master-0 kubenswrapper[34361]: I0224 05:41:10.999526 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:11.000932 master-0 kubenswrapper[34361]: I0224 05:41:10.999807 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:11.000932 master-0 kubenswrapper[34361]: E0224 05:41:10.999870 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:11.000932 master-0 kubenswrapper[34361]: E0224 05:41:11.000078 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 24 05:41:11.000932 master-0 kubenswrapper[34361]: E0224 05:41:11.000056 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:19.000012099 +0000 UTC m=+238.702629215 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:11.000932 master-0 kubenswrapper[34361]: E0224 05:41:11.000205 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:19.000179173 +0000 UTC m=+238.702796219 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-tls" not found Feb 24 05:41:19.067004 master-0 kubenswrapper[34361]: I0224 05:41:19.066874 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:19.067004 master-0 kubenswrapper[34361]: I0224 05:41:19.067015 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:19.068432 master-0 kubenswrapper[34361]: E0224 05:41:19.067415 34361 secret.go:189] Couldn't get secret 
openshift-monitoring/prometheus-k8s-tls: secret "prometheus-k8s-tls" not found Feb 24 05:41:19.068432 master-0 kubenswrapper[34361]: E0224 05:41:19.067512 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:35.067480736 +0000 UTC m=+254.770097812 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-tls" not found Feb 24 05:41:19.068432 master-0 kubenswrapper[34361]: E0224 05:41:19.067574 34361 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:19.068432 master-0 kubenswrapper[34361]: E0224 05:41:19.067733 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls podName:45536e5e-cd30-4946-a98f-1454c7a2f5e1 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:35.067692302 +0000 UTC m=+254.770309388 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "45536e5e-cd30-4946-a98f-1454c7a2f5e1") : secret "prometheus-k8s-thanos-sidecar-tls" not found Feb 24 05:41:20.394305 master-0 kubenswrapper[34361]: I0224 05:41:20.394146 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0" Feb 24 05:41:20.395490 master-0 kubenswrapper[34361]: E0224 05:41:20.394512 34361 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Feb 24 05:41:20.395490 master-0 kubenswrapper[34361]: E0224 05:41:20.394704 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls podName:c4b6cbff-c5df-420d-a923-0d27b8e1e896 nodeName:}" failed. No retries permitted until 2026-02-24 05:41:52.394654663 +0000 UTC m=+272.097271749 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "c4b6cbff-c5df-420d-a923-0d27b8e1e896") : secret "alertmanager-main-tls" not found Feb 24 05:41:22.540734 master-0 kubenswrapper[34361]: I0224 05:41:22.540559 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb" Feb 24 05:41:22.541745 master-0 kubenswrapper[34361]: E0224 05:41:22.540788 34361 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Feb 24 05:41:22.541745 master-0 kubenswrapper[34361]: E0224 05:41:22.540956 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert podName:b0a03ff3-e39b-4be9-bb1f-827d00437e62 nodeName:}" failed. No retries permitted until 2026-02-24 05:42:26.540924136 +0000 UTC m=+306.243541182 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert") pod "networking-console-plugin-79f587d78f-bctpb" (UID: "b0a03ff3-e39b-4be9-bb1f-827d00437e62") : secret "networking-console-plugin-cert" not found Feb 24 05:41:23.795872 master-0 kubenswrapper[34361]: I0224 05:41:23.795784 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7875b98987-bmnll"] Feb 24 05:41:23.796884 master-0 kubenswrapper[34361]: I0224 05:41:23.796828 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.826649 master-0 kubenswrapper[34361]: I0224 05:41:23.826553 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7875b98987-bmnll"] Feb 24 05:41:23.878562 master-0 kubenswrapper[34361]: I0224 05:41:23.878434 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-trusted-ca-bundle\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.878562 master-0 kubenswrapper[34361]: I0224 05:41:23.878531 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrks\" (UniqueName: \"kubernetes.io/projected/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-kube-api-access-qzrks\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.878562 master-0 kubenswrapper[34361]: I0224 05:41:23.878557 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-oauth-config\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.878562 master-0 kubenswrapper[34361]: I0224 05:41:23.878574 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-service-ca\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.879099 master-0 kubenswrapper[34361]: I0224 05:41:23.878696 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-config\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.879099 master-0 kubenswrapper[34361]: I0224 05:41:23.878722 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-oauth-serving-cert\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.879099 master-0 kubenswrapper[34361]: I0224 05:41:23.878742 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-serving-cert\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.980953 master-0 kubenswrapper[34361]: I0224 05:41:23.980844 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-config\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.980953 master-0 kubenswrapper[34361]: I0224 05:41:23.980911 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-oauth-serving-cert\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.980953 master-0 kubenswrapper[34361]: I0224 05:41:23.980932 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-serving-cert\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.981566 master-0 kubenswrapper[34361]: I0224 05:41:23.981235 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-trusted-ca-bundle\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.981566 master-0 kubenswrapper[34361]: I0224 05:41:23.981463 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrks\" (UniqueName: \"kubernetes.io/projected/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-kube-api-access-qzrks\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.982132 
master-0 kubenswrapper[34361]: I0224 05:41:23.982033 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-config\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.982358 master-0 kubenswrapper[34361]: I0224 05:41:23.982283 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-oauth-config\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.983359 master-0 kubenswrapper[34361]: I0224 05:41:23.983294 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-trusted-ca-bundle\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.983359 master-0 kubenswrapper[34361]: I0224 05:41:23.983344 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-service-ca\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.983548 master-0 kubenswrapper[34361]: I0224 05:41:23.983414 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-oauth-serving-cert\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 
05:41:23.984573 master-0 kubenswrapper[34361]: I0224 05:41:23.984503 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-service-ca\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.985649 master-0 kubenswrapper[34361]: I0224 05:41:23.985613 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-serving-cert\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:23.990664 master-0 kubenswrapper[34361]: I0224 05:41:23.989939 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-oauth-config\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:24.009360 master-0 kubenswrapper[34361]: I0224 05:41:24.009256 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrks\" (UniqueName: \"kubernetes.io/projected/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-kube-api-access-qzrks\") pod \"console-7875b98987-bmnll\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") " pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:24.134293 master-0 kubenswrapper[34361]: I0224 05:41:24.134001 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:24.676537 master-0 kubenswrapper[34361]: I0224 05:41:24.672658 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7875b98987-bmnll"] Feb 24 05:41:24.676537 master-0 kubenswrapper[34361]: W0224 05:41:24.674727 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd573eeb3_8e03_4793_9e4a_33d4a50c5b70.slice/crio-268c1f2e5888514bd98103357fabe5d6a4a6ba92fc2f501a925cc56e83119b02 WatchSource:0}: Error finding container 268c1f2e5888514bd98103357fabe5d6a4a6ba92fc2f501a925cc56e83119b02: Status 404 returned error can't find the container with id 268c1f2e5888514bd98103357fabe5d6a4a6ba92fc2f501a925cc56e83119b02 Feb 24 05:41:24.871228 master-0 kubenswrapper[34361]: I0224 05:41:24.871159 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7875b98987-bmnll" event={"ID":"d573eeb3-8e03-4793-9e4a-33d4a50c5b70","Type":"ContainerStarted","Data":"268c1f2e5888514bd98103357fabe5d6a4a6ba92fc2f501a925cc56e83119b02"} Feb 24 05:41:25.883083 master-0 kubenswrapper[34361]: I0224 05:41:25.882967 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7875b98987-bmnll" event={"ID":"d573eeb3-8e03-4793-9e4a-33d4a50c5b70","Type":"ContainerStarted","Data":"00f94470d50e12eca35d3f8fd71ce2f34471b0407a225974b9dc1ddb97de6ca8"} Feb 24 05:41:25.924615 master-0 kubenswrapper[34361]: I0224 05:41:25.924354 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7875b98987-bmnll" podStartSLOduration=2.924244635 podStartE2EDuration="2.924244635s" podCreationTimestamp="2026-02-24 05:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:41:25.914793941 +0000 UTC m=+245.617411047" 
watchObservedRunningTime="2026-02-24 05:41:25.924244635 +0000 UTC m=+245.626861711" Feb 24 05:41:34.134777 master-0 kubenswrapper[34361]: I0224 05:41:34.134687 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:34.134777 master-0 kubenswrapper[34361]: I0224 05:41:34.134767 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:34.141519 master-0 kubenswrapper[34361]: I0224 05:41:34.140562 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:34.987380 master-0 kubenswrapper[34361]: I0224 05:41:34.987322 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7875b98987-bmnll" Feb 24 05:41:35.085299 master-0 kubenswrapper[34361]: I0224 05:41:35.085228 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67bcb9df49-d2cv6"] Feb 24 05:41:35.164047 master-0 kubenswrapper[34361]: I0224 05:41:35.162718 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:35.164047 master-0 kubenswrapper[34361]: I0224 05:41:35.162784 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:35.167133 master-0 kubenswrapper[34361]: I0224 05:41:35.167095 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:35.173864 master-0 kubenswrapper[34361]: I0224 05:41:35.173800 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/45536e5e-cd30-4946-a98f-1454c7a2f5e1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"45536e5e-cd30-4946-a98f-1454c7a2f5e1\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:35.238810 master-0 kubenswrapper[34361]: I0224 05:41:35.238612 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:35.697699 master-0 kubenswrapper[34361]: I0224 05:41:35.697597 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 24 05:41:35.698985 master-0 kubenswrapper[34361]: W0224 05:41:35.698921 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45536e5e_cd30_4946_a98f_1454c7a2f5e1.slice/crio-148f85b47ac9d34fc962c1151b7c78bf606ddade16a3b04860588b138d65de35 WatchSource:0}: Error finding container 148f85b47ac9d34fc962c1151b7c78bf606ddade16a3b04860588b138d65de35: Status 404 returned error can't find the container with id 148f85b47ac9d34fc962c1151b7c78bf606ddade16a3b04860588b138d65de35 Feb 24 05:41:35.989934 master-0 kubenswrapper[34361]: I0224 05:41:35.989865 34361 generic.go:334] "Generic (PLEG): container finished" podID="45536e5e-cd30-4946-a98f-1454c7a2f5e1" containerID="c995005dc276d42720c67805b7c9570ecb828fa0e1100eab0ba18b2be829eba7" exitCode=0 Feb 24 05:41:35.990234 master-0 
kubenswrapper[34361]: I0224 05:41:35.990014 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"45536e5e-cd30-4946-a98f-1454c7a2f5e1","Type":"ContainerDied","Data":"c995005dc276d42720c67805b7c9570ecb828fa0e1100eab0ba18b2be829eba7"} Feb 24 05:41:35.991810 master-0 kubenswrapper[34361]: I0224 05:41:35.991557 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"45536e5e-cd30-4946-a98f-1454c7a2f5e1","Type":"ContainerStarted","Data":"148f85b47ac9d34fc962c1151b7c78bf606ddade16a3b04860588b138d65de35"} Feb 24 05:41:37.910923 master-0 kubenswrapper[34361]: I0224 05:41:37.910840 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f64db7f86-6brp5"] Feb 24 05:41:37.912377 master-0 kubenswrapper[34361]: I0224 05:41:37.912285 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:37.927476 master-0 kubenswrapper[34361]: I0224 05:41:37.922030 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f64db7f86-6brp5"] Feb 24 05:41:38.027253 master-0 kubenswrapper[34361]: I0224 05:41:38.027187 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6f8p\" (UniqueName: \"kubernetes.io/projected/4976bb0c-7870-482c-ab61-fcafe26f0e8c-kube-api-access-j6f8p\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.027253 master-0 kubenswrapper[34361]: I0224 05:41:38.027254 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-serving-cert\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") 
" pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.027586 master-0 kubenswrapper[34361]: I0224 05:41:38.027287 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-oauth-config\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.027586 master-0 kubenswrapper[34361]: I0224 05:41:38.027410 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-trusted-ca-bundle\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.027671 master-0 kubenswrapper[34361]: I0224 05:41:38.027593 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-oauth-serving-cert\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.027985 master-0 kubenswrapper[34361]: I0224 05:41:38.027924 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-config\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.028031 master-0 kubenswrapper[34361]: I0224 05:41:38.028001 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-service-ca\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.130237 master-0 kubenswrapper[34361]: I0224 05:41:38.130177 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-oauth-config\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.130567 master-0 kubenswrapper[34361]: I0224 05:41:38.130336 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-trusted-ca-bundle\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.130567 master-0 kubenswrapper[34361]: I0224 05:41:38.130377 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-oauth-serving-cert\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.130917 master-0 kubenswrapper[34361]: I0224 05:41:38.130806 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-config\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.131025 master-0 kubenswrapper[34361]: I0224 05:41:38.130989 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-service-ca\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.131370 master-0 kubenswrapper[34361]: I0224 05:41:38.131340 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6f8p\" (UniqueName: \"kubernetes.io/projected/4976bb0c-7870-482c-ab61-fcafe26f0e8c-kube-api-access-j6f8p\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.131462 master-0 kubenswrapper[34361]: I0224 05:41:38.131434 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-serving-cert\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.131701 master-0 kubenswrapper[34361]: I0224 05:41:38.131661 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-oauth-serving-cert\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.131862 master-0 kubenswrapper[34361]: I0224 05:41:38.131807 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-trusted-ca-bundle\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.132100 master-0 kubenswrapper[34361]: I0224 05:41:38.132057 34361 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-service-ca\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.132215 master-0 kubenswrapper[34361]: I0224 05:41:38.132057 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-config\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.136560 master-0 kubenswrapper[34361]: I0224 05:41:38.136522 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-oauth-config\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.152821 master-0 kubenswrapper[34361]: I0224 05:41:38.152766 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-serving-cert\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.154058 master-0 kubenswrapper[34361]: I0224 05:41:38.153998 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6f8p\" (UniqueName: \"kubernetes.io/projected/4976bb0c-7870-482c-ab61-fcafe26f0e8c-kube-api-access-j6f8p\") pod \"console-6f64db7f86-6brp5\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") " pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:38.248832 master-0 kubenswrapper[34361]: I0224 05:41:38.248773 34361 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:39.772405 master-0 kubenswrapper[34361]: I0224 05:41:39.771889 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f64db7f86-6brp5"] Feb 24 05:41:39.779050 master-0 kubenswrapper[34361]: W0224 05:41:39.776916 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4976bb0c_7870_482c_ab61_fcafe26f0e8c.slice/crio-f9a68acf5c7c050de6b1c1e54bff5cd5929bdb9506df4224740abe85844d0896 WatchSource:0}: Error finding container f9a68acf5c7c050de6b1c1e54bff5cd5929bdb9506df4224740abe85844d0896: Status 404 returned error can't find the container with id f9a68acf5c7c050de6b1c1e54bff5cd5929bdb9506df4224740abe85844d0896 Feb 24 05:41:40.052688 master-0 kubenswrapper[34361]: I0224 05:41:40.052627 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f64db7f86-6brp5" event={"ID":"4976bb0c-7870-482c-ab61-fcafe26f0e8c","Type":"ContainerStarted","Data":"48ab9caee7e8e3d9f3190b4432726245c0d7ade0b16d3266dd224f51c2716174"} Feb 24 05:41:40.052688 master-0 kubenswrapper[34361]: I0224 05:41:40.052685 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f64db7f86-6brp5" event={"ID":"4976bb0c-7870-482c-ab61-fcafe26f0e8c","Type":"ContainerStarted","Data":"f9a68acf5c7c050de6b1c1e54bff5cd5929bdb9506df4224740abe85844d0896"} Feb 24 05:41:40.059442 master-0 kubenswrapper[34361]: I0224 05:41:40.056349 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"45536e5e-cd30-4946-a98f-1454c7a2f5e1","Type":"ContainerStarted","Data":"74a7dcfd376ebd9bddfd052df6a8fa32c0552810cf9619f52b2208cb2b4adc3a"} Feb 24 05:41:40.059442 master-0 kubenswrapper[34361]: I0224 05:41:40.056389 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"45536e5e-cd30-4946-a98f-1454c7a2f5e1","Type":"ContainerStarted","Data":"74b26d91635c6f01e278c665e8c901c32aaad94717488652214c59fe0ea7a6d1"} Feb 24 05:41:40.059442 master-0 kubenswrapper[34361]: I0224 05:41:40.056406 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"45536e5e-cd30-4946-a98f-1454c7a2f5e1","Type":"ContainerStarted","Data":"1e741db1c4898da8e0c0137bb2522ea179c7adfb8cc37b51e22d5c133d1fc3bb"} Feb 24 05:41:40.077460 master-0 kubenswrapper[34361]: I0224 05:41:40.075808 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f64db7f86-6brp5" podStartSLOduration=3.075768876 podStartE2EDuration="3.075768876s" podCreationTimestamp="2026-02-24 05:41:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:41:40.074611235 +0000 UTC m=+259.777228321" watchObservedRunningTime="2026-02-24 05:41:40.075768876 +0000 UTC m=+259.778385922" Feb 24 05:41:41.080032 master-0 kubenswrapper[34361]: I0224 05:41:41.079973 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"45536e5e-cd30-4946-a98f-1454c7a2f5e1","Type":"ContainerStarted","Data":"3866eee69b9876bdd23e5826e7fc98def7c6cb7ae5a22b5a13190da3e038f90d"} Feb 24 05:41:41.080816 master-0 kubenswrapper[34361]: I0224 05:41:41.080788 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"45536e5e-cd30-4946-a98f-1454c7a2f5e1","Type":"ContainerStarted","Data":"526a9e4a03739f4e2b939b77399d7f1326cb63bfdc2f216fc0d0ff69fcb807e0"} Feb 24 05:41:41.080924 master-0 kubenswrapper[34361]: I0224 05:41:41.080905 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"45536e5e-cd30-4946-a98f-1454c7a2f5e1","Type":"ContainerStarted","Data":"b31c8856b3d5321fed0a848efc675c42b58621e22b08a0cc9e576f7feaef3e18"} Feb 24 05:41:41.127794 master-0 kubenswrapper[34361]: I0224 05:41:41.127648 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=34.806047073 podStartE2EDuration="38.127615848s" podCreationTimestamp="2026-02-24 05:41:03 +0000 UTC" firstStartedPulling="2026-02-24 05:41:35.992143272 +0000 UTC m=+255.694760308" lastFinishedPulling="2026-02-24 05:41:39.313712037 +0000 UTC m=+259.016329083" observedRunningTime="2026-02-24 05:41:41.119123059 +0000 UTC m=+260.821740155" watchObservedRunningTime="2026-02-24 05:41:41.127615848 +0000 UTC m=+260.830232924" Feb 24 05:41:45.239931 master-0 kubenswrapper[34361]: I0224 05:41:45.239838 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 24 05:41:48.250429 master-0 kubenswrapper[34361]: I0224 05:41:48.250280 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:48.250429 master-0 kubenswrapper[34361]: I0224 05:41:48.250392 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:48.255637 master-0 kubenswrapper[34361]: I0224 05:41:48.255586 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:49.156754 master-0 kubenswrapper[34361]: I0224 05:41:49.156688 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:41:49.225327 master-0 kubenswrapper[34361]: I0224 05:41:49.225212 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7875b98987-bmnll"] Feb 24 05:41:52.439801 master-0 
kubenswrapper[34361]: I0224 05:41:52.439691 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:41:52.451086 master-0 kubenswrapper[34361]: I0224 05:41:52.451013 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c4b6cbff-c5df-420d-a923-0d27b8e1e896-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c4b6cbff-c5df-420d-a923-0d27b8e1e896\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:41:52.665813 master-0 kubenswrapper[34361]: I0224 05:41:52.665735 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 24 05:41:53.221454 master-0 kubenswrapper[34361]: W0224 05:41:53.219387 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4b6cbff_c5df_420d_a923_0d27b8e1e896.slice/crio-b1af57fdac2c7aeccb30fe79a2f2f5006be761aa5bae5768146e42b229a76063 WatchSource:0}: Error finding container b1af57fdac2c7aeccb30fe79a2f2f5006be761aa5bae5768146e42b229a76063: Status 404 returned error can't find the container with id b1af57fdac2c7aeccb30fe79a2f2f5006be761aa5bae5768146e42b229a76063
Feb 24 05:41:53.232363 master-0 kubenswrapper[34361]: I0224 05:41:53.231938 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 24 05:41:54.232087 master-0 kubenswrapper[34361]: I0224 05:41:54.231988 34361 generic.go:334] "Generic (PLEG): container finished" podID="c4b6cbff-c5df-420d-a923-0d27b8e1e896" containerID="d7609a3f151d2b26f71f4f93d3f68bfe884c886d2859a93d7173beda50ca5efc" exitCode=0
Feb 24 05:41:54.232739 master-0 kubenswrapper[34361]: I0224 05:41:54.232088 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4b6cbff-c5df-420d-a923-0d27b8e1e896","Type":"ContainerDied","Data":"d7609a3f151d2b26f71f4f93d3f68bfe884c886d2859a93d7173beda50ca5efc"}
Feb 24 05:41:54.232739 master-0 kubenswrapper[34361]: I0224 05:41:54.232203 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4b6cbff-c5df-420d-a923-0d27b8e1e896","Type":"ContainerStarted","Data":"b1af57fdac2c7aeccb30fe79a2f2f5006be761aa5bae5768146e42b229a76063"}
Feb 24 05:41:56.256365 master-0 kubenswrapper[34361]: I0224 05:41:56.256274 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4b6cbff-c5df-420d-a923-0d27b8e1e896","Type":"ContainerStarted","Data":"3749c196fdd5d00d7eed40bdde070feb7ae1d6c640d3dbf5c4182cab5626b867"}
Feb 24 05:41:56.256365 master-0 kubenswrapper[34361]: I0224 05:41:56.256372 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4b6cbff-c5df-420d-a923-0d27b8e1e896","Type":"ContainerStarted","Data":"f9f4c5d44e57be7e1754c2b0825504ca92e24a9361f96dbe0fe79c4e08381f9e"}
Feb 24 05:41:57.282762 master-0 kubenswrapper[34361]: I0224 05:41:57.282657 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4b6cbff-c5df-420d-a923-0d27b8e1e896","Type":"ContainerStarted","Data":"d0167f6568d629f7f0258ea535a4657380df49839434368bf84036bdfbc7b335"}
Feb 24 05:41:57.282762 master-0 kubenswrapper[34361]: I0224 05:41:57.282759 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4b6cbff-c5df-420d-a923-0d27b8e1e896","Type":"ContainerStarted","Data":"a475ac141f045010d40315f23e547ca952cb16d73cb3ac1983e469fc74d34492"}
Feb 24 05:41:57.283605 master-0 kubenswrapper[34361]: I0224 05:41:57.282784 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4b6cbff-c5df-420d-a923-0d27b8e1e896","Type":"ContainerStarted","Data":"291bb70c7bcff1f12ba96375c2f476c898be82019a51b904ba4962059ff90bdd"}
Feb 24 05:41:57.283605 master-0 kubenswrapper[34361]: I0224 05:41:57.282808 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c4b6cbff-c5df-420d-a923-0d27b8e1e896","Type":"ContainerStarted","Data":"e8a88d95830f126cb7e8ae55c75c465e0fee1ad934bf4d09706e8509980e3fcd"}
Feb 24 05:41:57.337286 master-0 kubenswrapper[34361]: I0224 05:41:57.337138 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=67.773507722 podStartE2EDuration="1m9.337106713s" podCreationTimestamp="2026-02-24 05:40:48 +0000 UTC" firstStartedPulling="2026-02-24 05:41:54.234682217 +0000 UTC m=+273.937299293" lastFinishedPulling="2026-02-24 05:41:55.798281218 +0000 UTC m=+275.500898284" observedRunningTime="2026-02-24 05:41:57.328178472 +0000 UTC m=+277.030795538" watchObservedRunningTime="2026-02-24 05:41:57.337106713 +0000 UTC m=+277.039723789"
Feb 24 05:42:00.149168 master-0 kubenswrapper[34361]: I0224 05:42:00.149097 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-67bcb9df49-d2cv6" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" containerID="cri-o://76eff019d66fe5abbd1ccb06357908f71a83ac02cd16385fcdcc99a4c5ce4117" gracePeriod=15
Feb 24 05:42:00.316963 master-0 kubenswrapper[34361]: I0224 05:42:00.316816 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67bcb9df49-d2cv6_c300d6c7-66fb-41c5-b099-0e9e4a235e76/console/0.log"
Feb 24 05:42:00.317406 master-0 kubenswrapper[34361]: I0224 05:42:00.317053 34361 generic.go:334] "Generic (PLEG): container finished" podID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerID="76eff019d66fe5abbd1ccb06357908f71a83ac02cd16385fcdcc99a4c5ce4117" exitCode=2
Feb 24 05:42:00.317406 master-0 kubenswrapper[34361]: I0224 05:42:00.317214 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bcb9df49-d2cv6" event={"ID":"c300d6c7-66fb-41c5-b099-0e9e4a235e76","Type":"ContainerDied","Data":"76eff019d66fe5abbd1ccb06357908f71a83ac02cd16385fcdcc99a4c5ce4117"}
Feb 24 05:42:00.648672 master-0 kubenswrapper[34361]: I0224 05:42:00.648635 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67bcb9df49-d2cv6_c300d6c7-66fb-41c5-b099-0e9e4a235e76/console/0.log"
Feb 24 05:42:00.649007 master-0 kubenswrapper[34361]: I0224 05:42:00.648992 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67bcb9df49-d2cv6"
Feb 24 05:42:00.808250 master-0 kubenswrapper[34361]: I0224 05:42:00.808155 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-trusted-ca-bundle\") pod \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") "
Feb 24 05:42:00.808250 master-0 kubenswrapper[34361]: I0224 05:42:00.808235 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qjkx\" (UniqueName: \"kubernetes.io/projected/c300d6c7-66fb-41c5-b099-0e9e4a235e76-kube-api-access-8qjkx\") pod \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") "
Feb 24 05:42:00.808757 master-0 kubenswrapper[34361]: I0224 05:42:00.808412 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-config\") pod \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") "
Feb 24 05:42:00.808757 master-0 kubenswrapper[34361]: I0224 05:42:00.808494 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-oauth-config\") pod \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") "
Feb 24 05:42:00.808993 master-0 kubenswrapper[34361]: I0224 05:42:00.808908 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c300d6c7-66fb-41c5-b099-0e9e4a235e76" (UID: "c300d6c7-66fb-41c5-b099-0e9e4a235e76"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:00.809286 master-0 kubenswrapper[34361]: I0224 05:42:00.809246 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-serving-cert\") pod \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") "
Feb 24 05:42:00.809561 master-0 kubenswrapper[34361]: I0224 05:42:00.809518 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-oauth-serving-cert\") pod \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") "
Feb 24 05:42:00.809774 master-0 kubenswrapper[34361]: I0224 05:42:00.809736 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-service-ca\") pod \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\" (UID: \"c300d6c7-66fb-41c5-b099-0e9e4a235e76\") "
Feb 24 05:42:00.809979 master-0 kubenswrapper[34361]: I0224 05:42:00.809876 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-config" (OuterVolumeSpecName: "console-config") pod "c300d6c7-66fb-41c5-b099-0e9e4a235e76" (UID: "c300d6c7-66fb-41c5-b099-0e9e4a235e76"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:00.810471 master-0 kubenswrapper[34361]: I0224 05:42:00.810428 34361 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:00.810471 master-0 kubenswrapper[34361]: I0224 05:42:00.810465 34361 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:00.810633 master-0 kubenswrapper[34361]: I0224 05:42:00.810576 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c300d6c7-66fb-41c5-b099-0e9e4a235e76" (UID: "c300d6c7-66fb-41c5-b099-0e9e4a235e76"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:00.810708 master-0 kubenswrapper[34361]: I0224 05:42:00.810605 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-service-ca" (OuterVolumeSpecName: "service-ca") pod "c300d6c7-66fb-41c5-b099-0e9e4a235e76" (UID: "c300d6c7-66fb-41c5-b099-0e9e4a235e76"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:00.813752 master-0 kubenswrapper[34361]: I0224 05:42:00.813687 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c300d6c7-66fb-41c5-b099-0e9e4a235e76-kube-api-access-8qjkx" (OuterVolumeSpecName: "kube-api-access-8qjkx") pod "c300d6c7-66fb-41c5-b099-0e9e4a235e76" (UID: "c300d6c7-66fb-41c5-b099-0e9e4a235e76"). InnerVolumeSpecName "kube-api-access-8qjkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:42:00.815795 master-0 kubenswrapper[34361]: I0224 05:42:00.815695 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c300d6c7-66fb-41c5-b099-0e9e4a235e76" (UID: "c300d6c7-66fb-41c5-b099-0e9e4a235e76"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:42:00.816800 master-0 kubenswrapper[34361]: I0224 05:42:00.816676 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c300d6c7-66fb-41c5-b099-0e9e4a235e76" (UID: "c300d6c7-66fb-41c5-b099-0e9e4a235e76"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:42:00.912936 master-0 kubenswrapper[34361]: I0224 05:42:00.912800 34361 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:00.912936 master-0 kubenswrapper[34361]: I0224 05:42:00.912898 34361 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:00.912936 master-0 kubenswrapper[34361]: I0224 05:42:00.912920 34361 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c300d6c7-66fb-41c5-b099-0e9e4a235e76-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:00.912936 master-0 kubenswrapper[34361]: I0224 05:42:00.912942 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qjkx\" (UniqueName: \"kubernetes.io/projected/c300d6c7-66fb-41c5-b099-0e9e4a235e76-kube-api-access-8qjkx\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:00.912936 master-0 kubenswrapper[34361]: I0224 05:42:00.912962 34361 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c300d6c7-66fb-41c5-b099-0e9e4a235e76-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:01.332982 master-0 kubenswrapper[34361]: I0224 05:42:01.332849 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67bcb9df49-d2cv6_c300d6c7-66fb-41c5-b099-0e9e4a235e76/console/0.log"
Feb 24 05:42:01.334123 master-0 kubenswrapper[34361]: I0224 05:42:01.332993 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67bcb9df49-d2cv6" event={"ID":"c300d6c7-66fb-41c5-b099-0e9e4a235e76","Type":"ContainerDied","Data":"aa3668feecd3d666dacc9f48e108a998a72d35ceb407ca62190b08acd01e6da6"}
Feb 24 05:42:01.334123 master-0 kubenswrapper[34361]: I0224 05:42:01.333110 34361 scope.go:117] "RemoveContainer" containerID="76eff019d66fe5abbd1ccb06357908f71a83ac02cd16385fcdcc99a4c5ce4117"
Feb 24 05:42:01.334123 master-0 kubenswrapper[34361]: I0224 05:42:01.333116 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67bcb9df49-d2cv6"
Feb 24 05:42:01.412385 master-0 kubenswrapper[34361]: I0224 05:42:01.412260 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67bcb9df49-d2cv6"]
Feb 24 05:42:01.434299 master-0 kubenswrapper[34361]: I0224 05:42:01.434158 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-67bcb9df49-d2cv6"]
Feb 24 05:42:02.608468 master-0 kubenswrapper[34361]: I0224 05:42:02.608221 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" path="/var/lib/kubelet/pods/c300d6c7-66fb-41c5-b099-0e9e4a235e76/volumes"
Feb 24 05:42:14.267513 master-0 kubenswrapper[34361]: I0224 05:42:14.267367 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7875b98987-bmnll" podUID="d573eeb3-8e03-4793-9e4a-33d4a50c5b70" containerName="console" containerID="cri-o://00f94470d50e12eca35d3f8fd71ce2f34471b0407a225974b9dc1ddb97de6ca8" gracePeriod=15
Feb 24 05:42:14.499305 master-0 kubenswrapper[34361]: I0224 05:42:14.499206 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7875b98987-bmnll_d573eeb3-8e03-4793-9e4a-33d4a50c5b70/console/0.log"
Feb 24 05:42:14.499680 master-0 kubenswrapper[34361]: I0224 05:42:14.499340 34361 generic.go:334] "Generic (PLEG): container finished" podID="d573eeb3-8e03-4793-9e4a-33d4a50c5b70" containerID="00f94470d50e12eca35d3f8fd71ce2f34471b0407a225974b9dc1ddb97de6ca8" exitCode=2
Feb 24 05:42:14.499680 master-0 kubenswrapper[34361]: I0224 05:42:14.499416 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7875b98987-bmnll" event={"ID":"d573eeb3-8e03-4793-9e4a-33d4a50c5b70","Type":"ContainerDied","Data":"00f94470d50e12eca35d3f8fd71ce2f34471b0407a225974b9dc1ddb97de6ca8"}
Feb 24 05:42:14.858394 master-0 kubenswrapper[34361]: I0224 05:42:14.858272 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7875b98987-bmnll_d573eeb3-8e03-4793-9e4a-33d4a50c5b70/console/0.log"
Feb 24 05:42:14.859132 master-0 kubenswrapper[34361]: I0224 05:42:14.858431 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7875b98987-bmnll"
Feb 24 05:42:15.031437 master-0 kubenswrapper[34361]: I0224 05:42:15.031272 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzrks\" (UniqueName: \"kubernetes.io/projected/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-kube-api-access-qzrks\") pod \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") "
Feb 24 05:42:15.031871 master-0 kubenswrapper[34361]: I0224 05:42:15.031504 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-oauth-serving-cert\") pod \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") "
Feb 24 05:42:15.031871 master-0 kubenswrapper[34361]: I0224 05:42:15.031613 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-oauth-config\") pod \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") "
Feb 24 05:42:15.031871 master-0 kubenswrapper[34361]: I0224 05:42:15.031789 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-service-ca\") pod \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") "
Feb 24 05:42:15.031871 master-0 kubenswrapper[34361]: I0224 05:42:15.031851 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-trusted-ca-bundle\") pod \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") "
Feb 24 05:42:15.032284 master-0 kubenswrapper[34361]: I0224 05:42:15.031901 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-serving-cert\") pod \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") "
Feb 24 05:42:15.032284 master-0 kubenswrapper[34361]: I0224 05:42:15.032020 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-config\") pod \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\" (UID: \"d573eeb3-8e03-4793-9e4a-33d4a50c5b70\") "
Feb 24 05:42:15.032715 master-0 kubenswrapper[34361]: I0224 05:42:15.032627 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d573eeb3-8e03-4793-9e4a-33d4a50c5b70" (UID: "d573eeb3-8e03-4793-9e4a-33d4a50c5b70"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:15.032860 master-0 kubenswrapper[34361]: I0224 05:42:15.032705 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-service-ca" (OuterVolumeSpecName: "service-ca") pod "d573eeb3-8e03-4793-9e4a-33d4a50c5b70" (UID: "d573eeb3-8e03-4793-9e4a-33d4a50c5b70"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:15.033184 master-0 kubenswrapper[34361]: I0224 05:42:15.033091 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d573eeb3-8e03-4793-9e4a-33d4a50c5b70" (UID: "d573eeb3-8e03-4793-9e4a-33d4a50c5b70"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:15.033473 master-0 kubenswrapper[34361]: I0224 05:42:15.033263 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-config" (OuterVolumeSpecName: "console-config") pod "d573eeb3-8e03-4793-9e4a-33d4a50c5b70" (UID: "d573eeb3-8e03-4793-9e4a-33d4a50c5b70"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:15.033586 master-0 kubenswrapper[34361]: I0224 05:42:15.033556 34361 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:15.033654 master-0 kubenswrapper[34361]: I0224 05:42:15.033592 34361 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-service-ca\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:15.033654 master-0 kubenswrapper[34361]: I0224 05:42:15.033618 34361 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:15.036294 master-0 kubenswrapper[34361]: I0224 05:42:15.036182 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d573eeb3-8e03-4793-9e4a-33d4a50c5b70" (UID: "d573eeb3-8e03-4793-9e4a-33d4a50c5b70"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:42:15.036294 master-0 kubenswrapper[34361]: I0224 05:42:15.036213 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d573eeb3-8e03-4793-9e4a-33d4a50c5b70" (UID: "d573eeb3-8e03-4793-9e4a-33d4a50c5b70"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:42:15.037059 master-0 kubenswrapper[34361]: I0224 05:42:15.036995 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-kube-api-access-qzrks" (OuterVolumeSpecName: "kube-api-access-qzrks") pod "d573eeb3-8e03-4793-9e4a-33d4a50c5b70" (UID: "d573eeb3-8e03-4793-9e4a-33d4a50c5b70"). InnerVolumeSpecName "kube-api-access-qzrks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:42:15.136163 master-0 kubenswrapper[34361]: I0224 05:42:15.135859 34361 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:15.136163 master-0 kubenswrapper[34361]: I0224 05:42:15.135917 34361 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:15.136163 master-0 kubenswrapper[34361]: I0224 05:42:15.135932 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzrks\" (UniqueName: \"kubernetes.io/projected/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-kube-api-access-qzrks\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:15.136163 master-0 kubenswrapper[34361]: I0224 05:42:15.135947 34361 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d573eeb3-8e03-4793-9e4a-33d4a50c5b70-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:42:15.510753 master-0 kubenswrapper[34361]: I0224 05:42:15.510695 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7875b98987-bmnll_d573eeb3-8e03-4793-9e4a-33d4a50c5b70/console/0.log"
Feb 24 05:42:15.511480 master-0 kubenswrapper[34361]: I0224 05:42:15.510767 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7875b98987-bmnll" event={"ID":"d573eeb3-8e03-4793-9e4a-33d4a50c5b70","Type":"ContainerDied","Data":"268c1f2e5888514bd98103357fabe5d6a4a6ba92fc2f501a925cc56e83119b02"}
Feb 24 05:42:15.511480 master-0 kubenswrapper[34361]: I0224 05:42:15.510829 34361 scope.go:117] "RemoveContainer" containerID="00f94470d50e12eca35d3f8fd71ce2f34471b0407a225974b9dc1ddb97de6ca8"
Feb 24 05:42:15.511480 master-0 kubenswrapper[34361]: I0224 05:42:15.510882 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7875b98987-bmnll"
Feb 24 05:42:15.566259 master-0 kubenswrapper[34361]: I0224 05:42:15.566188 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7875b98987-bmnll"]
Feb 24 05:42:15.572709 master-0 kubenswrapper[34361]: I0224 05:42:15.572640 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7875b98987-bmnll"]
Feb 24 05:42:16.612633 master-0 kubenswrapper[34361]: I0224 05:42:16.612571 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d573eeb3-8e03-4793-9e4a-33d4a50c5b70" path="/var/lib/kubelet/pods/d573eeb3-8e03-4793-9e4a-33d4a50c5b70/volumes"
Feb 24 05:42:20.564349 master-0 kubenswrapper[34361]: I0224 05:42:20.564243 34361 kubelet.go:1505] "Image garbage collection succeeded"
Feb 24 05:42:21.356254 master-0 kubenswrapper[34361]: E0224 05:42:21.356119 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb" podUID="b0a03ff3-e39b-4be9-bb1f-827d00437e62"
Feb 24 05:42:21.597304 master-0 kubenswrapper[34361]: I0224 05:42:21.597223 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:42:26.588344 master-0 kubenswrapper[34361]: I0224 05:42:26.588192 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:42:26.594110 master-0 kubenswrapper[34361]: I0224 05:42:26.594035 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b0a03ff3-e39b-4be9-bb1f-827d00437e62-networking-console-plugin-cert\") pod \"networking-console-plugin-79f587d78f-bctpb\" (UID: \"b0a03ff3-e39b-4be9-bb1f-827d00437e62\") " pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:42:26.698442 master-0 kubenswrapper[34361]: I0224 05:42:26.698339 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb"
Feb 24 05:42:27.219290 master-0 kubenswrapper[34361]: W0224 05:42:27.219205 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0a03ff3_e39b_4be9_bb1f_827d00437e62.slice/crio-e095aef41fed11fdffb36686ca65dbf89781e83813a82332b2ac2348ad25960e WatchSource:0}: Error finding container e095aef41fed11fdffb36686ca65dbf89781e83813a82332b2ac2348ad25960e: Status 404 returned error can't find the container with id e095aef41fed11fdffb36686ca65dbf89781e83813a82332b2ac2348ad25960e
Feb 24 05:42:27.222720 master-0 kubenswrapper[34361]: I0224 05:42:27.222668 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-79f587d78f-bctpb"]
Feb 24 05:42:27.654038 master-0 kubenswrapper[34361]: I0224 05:42:27.653939 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb" event={"ID":"b0a03ff3-e39b-4be9-bb1f-827d00437e62","Type":"ContainerStarted","Data":"e095aef41fed11fdffb36686ca65dbf89781e83813a82332b2ac2348ad25960e"}
Feb 24 05:42:29.675040 master-0 kubenswrapper[34361]: I0224 05:42:29.674812 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb" event={"ID":"b0a03ff3-e39b-4be9-bb1f-827d00437e62","Type":"ContainerStarted","Data":"25533676ee07a8649536549422e4045e9f80b906942cb29f5b1ae580b3f55414"}
Feb 24 05:42:29.698842 master-0 kubenswrapper[34361]: I0224 05:42:29.698713 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-79f587d78f-bctpb" podStartSLOduration=130.295500987 podStartE2EDuration="2m11.698694482s" podCreationTimestamp="2026-02-24 05:40:18 +0000 UTC" firstStartedPulling="2026-02-24 05:42:27.221829795 +0000 UTC m=+306.924446851" lastFinishedPulling="2026-02-24 05:42:28.62502329 +0000 UTC m=+308.327640346" observedRunningTime="2026-02-24 05:42:29.698546788 +0000 UTC m=+309.401163844" watchObservedRunningTime="2026-02-24 05:42:29.698694482 +0000 UTC m=+309.401311528"
Feb 24 05:42:35.239073 master-0 kubenswrapper[34361]: I0224 05:42:35.238950 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:42:35.278921 master-0 kubenswrapper[34361]: I0224 05:42:35.278825 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:42:35.785824 master-0 kubenswrapper[34361]: I0224 05:42:35.785673 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 24 05:42:48.784707 master-0 kubenswrapper[34361]: I0224 05:42:48.784554 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:42:48.875550 master-0 kubenswrapper[34361]: I0224 05:42:48.875297 34361 generic.go:334] "Generic (PLEG): container finished" podID="2f48332e-92de-42aa-a6e6-db161f005e74" containerID="4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb" exitCode=0
Feb 24 05:42:48.875550 master-0 kubenswrapper[34361]: I0224 05:42:48.875401 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" event={"ID":"2f48332e-92de-42aa-a6e6-db161f005e74","Type":"ContainerDied","Data":"4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb"}
Feb 24 05:42:48.875550 master-0 kubenswrapper[34361]: I0224 05:42:48.875448 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj" event={"ID":"2f48332e-92de-42aa-a6e6-db161f005e74","Type":"ContainerDied","Data":"4ebd137aadd86a90697f1884cb52d1970bb5138e39026928308cfa18816924e6"}
Feb 24 05:42:48.875550 master-0 kubenswrapper[34361]: I0224 05:42:48.875491 34361 scope.go:117] "RemoveContainer" containerID="4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb"
Feb 24 05:42:48.875550 master-0 kubenswrapper[34361]: I0224 05:42:48.875554 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-65cdf565cd-555rj"
Feb 24 05:42:48.890528 master-0 kubenswrapper[34361]: I0224 05:42:48.890465 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") pod \"2f48332e-92de-42aa-a6e6-db161f005e74\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") "
Feb 24 05:42:48.890528 master-0 kubenswrapper[34361]: I0224 05:42:48.890533 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") pod \"2f48332e-92de-42aa-a6e6-db161f005e74\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") "
Feb 24 05:42:48.890764 master-0 kubenswrapper[34361]: I0224 05:42:48.890575 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") pod \"2f48332e-92de-42aa-a6e6-db161f005e74\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") "
Feb 24 05:42:48.890764 master-0 kubenswrapper[34361]: I0224 05:42:48.890596 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") pod \"2f48332e-92de-42aa-a6e6-db161f005e74\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") "
Feb 24 05:42:48.890764 master-0 kubenswrapper[34361]: I0224 05:42:48.890650 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log\") pod \"2f48332e-92de-42aa-a6e6-db161f005e74\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") "
Feb 24 05:42:48.890764 master-0 kubenswrapper[34361]: I0224 05:42:48.890705 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") pod \"2f48332e-92de-42aa-a6e6-db161f005e74\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") "
Feb 24 05:42:48.890764 master-0 kubenswrapper[34361]: I0224 05:42:48.890764 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc42f\" (UniqueName: \"kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f\") pod \"2f48332e-92de-42aa-a6e6-db161f005e74\" (UID: \"2f48332e-92de-42aa-a6e6-db161f005e74\") "
Feb 24 05:42:48.891253 master-0 kubenswrapper[34361]: I0224 05:42:48.891200 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "2f48332e-92de-42aa-a6e6-db161f005e74" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:48.891944 master-0 kubenswrapper[34361]: I0224 05:42:48.891893 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log" (OuterVolumeSpecName: "audit-log") pod "2f48332e-92de-42aa-a6e6-db161f005e74" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:42:48.891944 master-0 kubenswrapper[34361]: I0224 05:42:48.891898 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "2f48332e-92de-42aa-a6e6-db161f005e74" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:42:48.896558 master-0 kubenswrapper[34361]: I0224 05:42:48.896391 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f" (OuterVolumeSpecName: "kube-api-access-kc42f") pod "2f48332e-92de-42aa-a6e6-db161f005e74" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74"). InnerVolumeSpecName "kube-api-access-kc42f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:42:48.896783 master-0 kubenswrapper[34361]: I0224 05:42:48.896564 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "2f48332e-92de-42aa-a6e6-db161f005e74" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:42:48.899017 master-0 kubenswrapper[34361]: I0224 05:42:48.898919 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "2f48332e-92de-42aa-a6e6-db161f005e74" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:42:48.901670 master-0 kubenswrapper[34361]: I0224 05:42:48.901617 34361 scope.go:117] "RemoveContainer" containerID="4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb"
Feb 24 05:42:48.902199 master-0 kubenswrapper[34361]: E0224 05:42:48.902149 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb\": container with ID starting with 4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb not found: ID does not exist" containerID="4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb"
Feb 24 05:42:48.902340 master-0 kubenswrapper[34361]: I0224 05:42:48.902192 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb"} err="failed to get container status \"4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb\": rpc error: code = NotFound desc = could not find container \"4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb\": container with ID starting with 4858fceb65a04923cb067f166343a6fb2307e9f4257f862dde38618470f1f9bb not found: ID does not exist"
Feb 24 05:42:48.902648 master-0 kubenswrapper[34361]: I0224 05:42:48.902542 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "2f48332e-92de-42aa-a6e6-db161f005e74" (UID: "2f48332e-92de-42aa-a6e6-db161f005e74"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:42:48.995534 master-0 kubenswrapper[34361]: I0224 05:42:48.992888 34361 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Feb 24 05:42:48.995534 master-0 kubenswrapper[34361]: I0224 05:42:48.993002 34361 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:42:48.995534 master-0 kubenswrapper[34361]: I0224 05:42:48.993068 34361 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f48332e-92de-42aa-a6e6-db161f005e74-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:42:48.995534 master-0 kubenswrapper[34361]: I0224 05:42:48.993096 34361 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:42:48.995534 master-0 kubenswrapper[34361]: I0224 05:42:48.993157 34361 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/2f48332e-92de-42aa-a6e6-db161f005e74-audit-log\") on node \"master-0\" DevicePath \"\"" Feb 24 05:42:48.995534 master-0 kubenswrapper[34361]: I0224 05:42:48.993180 34361 reconciler_common.go:293] "Volume detached for volume 
\"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/2f48332e-92de-42aa-a6e6-db161f005e74-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Feb 24 05:42:48.995534 master-0 kubenswrapper[34361]: I0224 05:42:48.993199 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc42f\" (UniqueName: \"kubernetes.io/projected/2f48332e-92de-42aa-a6e6-db161f005e74-kube-api-access-kc42f\") on node \"master-0\" DevicePath \"\"" Feb 24 05:42:49.229848 master-0 kubenswrapper[34361]: I0224 05:42:49.229704 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-65cdf565cd-555rj"] Feb 24 05:42:49.236959 master-0 kubenswrapper[34361]: I0224 05:42:49.236882 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-65cdf565cd-555rj"] Feb 24 05:42:50.300250 master-0 kubenswrapper[34361]: I0224 05:42:50.300079 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: E0224 05:42:50.300413 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f48332e-92de-42aa-a6e6-db161f005e74" containerName="metrics-server" Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: I0224 05:42:50.300429 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f48332e-92de-42aa-a6e6-db161f005e74" containerName="metrics-server" Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: E0224 05:42:50.300447 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: I0224 05:42:50.300453 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: E0224 05:42:50.300502 34361 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="d573eeb3-8e03-4793-9e4a-33d4a50c5b70" containerName="console" Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: I0224 05:42:50.300511 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="d573eeb3-8e03-4793-9e4a-33d4a50c5b70" containerName="console" Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: I0224 05:42:50.300667 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="d573eeb3-8e03-4793-9e4a-33d4a50c5b70" containerName="console" Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: I0224 05:42:50.300748 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f48332e-92de-42aa-a6e6-db161f005e74" containerName="metrics-server" Feb 24 05:42:50.301118 master-0 kubenswrapper[34361]: I0224 05:42:50.300772 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c300d6c7-66fb-41c5-b099-0e9e4a235e76" containerName="console" Feb 24 05:42:50.301572 master-0 kubenswrapper[34361]: I0224 05:42:50.301422 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.305158 master-0 kubenswrapper[34361]: I0224 05:42:50.304298 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjfbr" Feb 24 05:42:50.311467 master-0 kubenswrapper[34361]: I0224 05:42:50.311206 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 05:42:50.313727 master-0 kubenswrapper[34361]: I0224 05:42:50.313696 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 24 05:42:50.417562 master-0 kubenswrapper[34361]: I0224 05:42:50.417461 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449763cc-c98c-4652-b9c3-893213b0efce-kube-api-access\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.417940 master-0 kubenswrapper[34361]: I0224 05:42:50.417635 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.417940 master-0 kubenswrapper[34361]: I0224 05:42:50.417688 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-var-lock\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.519782 master-0 
kubenswrapper[34361]: I0224 05:42:50.519714 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.520367 master-0 kubenswrapper[34361]: I0224 05:42:50.520347 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-var-lock\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.520540 master-0 kubenswrapper[34361]: I0224 05:42:50.520467 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-var-lock\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.520737 master-0 kubenswrapper[34361]: I0224 05:42:50.520036 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.520819 master-0 kubenswrapper[34361]: I0224 05:42:50.520701 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449763cc-c98c-4652-b9c3-893213b0efce-kube-api-access\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.551484 
master-0 kubenswrapper[34361]: I0224 05:42:50.551288 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449763cc-c98c-4652-b9c3-893213b0efce-kube-api-access\") pod \"installer-4-master-0\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:50.607373 master-0 kubenswrapper[34361]: I0224 05:42:50.607325 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f48332e-92de-42aa-a6e6-db161f005e74" path="/var/lib/kubelet/pods/2f48332e-92de-42aa-a6e6-db161f005e74/volumes" Feb 24 05:42:50.646647 master-0 kubenswrapper[34361]: I0224 05:42:50.645360 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 24 05:42:51.214200 master-0 kubenswrapper[34361]: I0224 05:42:51.214113 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 24 05:42:51.219571 master-0 kubenswrapper[34361]: W0224 05:42:51.219506 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod449763cc_c98c_4652_b9c3_893213b0efce.slice/crio-4a2300b300a7f7e7bf562a0ca9e918e3a90fffeaf6369ddbd455b3dab58aa74e WatchSource:0}: Error finding container 4a2300b300a7f7e7bf562a0ca9e918e3a90fffeaf6369ddbd455b3dab58aa74e: Status 404 returned error can't find the container with id 4a2300b300a7f7e7bf562a0ca9e918e3a90fffeaf6369ddbd455b3dab58aa74e Feb 24 05:42:51.910540 master-0 kubenswrapper[34361]: I0224 05:42:51.910451 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"449763cc-c98c-4652-b9c3-893213b0efce","Type":"ContainerStarted","Data":"7d8009c5dbdcd8a53dc663b491feee2798b80b08ab18c247d6e2f8a8ed15ae85"} Feb 24 05:42:51.910540 master-0 kubenswrapper[34361]: I0224 05:42:51.910526 34361 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"449763cc-c98c-4652-b9c3-893213b0efce","Type":"ContainerStarted","Data":"4a2300b300a7f7e7bf562a0ca9e918e3a90fffeaf6369ddbd455b3dab58aa74e"} Feb 24 05:42:51.941281 master-0 kubenswrapper[34361]: I0224 05:42:51.941150 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=1.9411201409999999 podStartE2EDuration="1.941120141s" podCreationTimestamp="2026-02-24 05:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:42:51.93739523 +0000 UTC m=+331.640012356" watchObservedRunningTime="2026-02-24 05:42:51.941120141 +0000 UTC m=+331.643737197" Feb 24 05:42:59.022561 master-0 kubenswrapper[34361]: I0224 05:42:59.022492 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6d5c5b46fd-qr4b5"] Feb 24 05:42:59.024659 master-0 kubenswrapper[34361]: I0224 05:42:59.024639 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.034546 master-0 kubenswrapper[34361]: I0224 05:42:59.034517 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d5c5b46fd-qr4b5"] Feb 24 05:42:59.095104 master-0 kubenswrapper[34361]: I0224 05:42:59.092759 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-serving-cert\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.095104 master-0 kubenswrapper[34361]: I0224 05:42:59.092849 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-console-config\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.095104 master-0 kubenswrapper[34361]: I0224 05:42:59.092887 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-trusted-ca-bundle\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.095104 master-0 kubenswrapper[34361]: I0224 05:42:59.092956 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-oauth-config\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.095104 master-0 
kubenswrapper[34361]: I0224 05:42:59.093047 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-oauth-serving-cert\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.095104 master-0 kubenswrapper[34361]: I0224 05:42:59.093095 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdb7l\" (UniqueName: \"kubernetes.io/projected/033badcd-d62d-4a0c-a069-874f0892c4d7-kube-api-access-wdb7l\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.095104 master-0 kubenswrapper[34361]: I0224 05:42:59.093120 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-service-ca\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.195111 master-0 kubenswrapper[34361]: I0224 05:42:59.195006 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdb7l\" (UniqueName: \"kubernetes.io/projected/033badcd-d62d-4a0c-a069-874f0892c4d7-kube-api-access-wdb7l\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.195405 master-0 kubenswrapper[34361]: I0224 05:42:59.195126 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-service-ca\") pod \"console-6d5c5b46fd-qr4b5\" (UID: 
\"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.195405 master-0 kubenswrapper[34361]: I0224 05:42:59.195246 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-serving-cert\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.195405 master-0 kubenswrapper[34361]: I0224 05:42:59.195288 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-console-config\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.195405 master-0 kubenswrapper[34361]: I0224 05:42:59.195346 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-trusted-ca-bundle\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.195405 master-0 kubenswrapper[34361]: I0224 05:42:59.195390 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-oauth-config\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.195575 master-0 kubenswrapper[34361]: I0224 05:42:59.195464 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-oauth-serving-cert\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.196628 master-0 kubenswrapper[34361]: I0224 05:42:59.196594 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-console-config\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.197052 master-0 kubenswrapper[34361]: I0224 05:42:59.197008 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-oauth-serving-cert\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.200519 master-0 kubenswrapper[34361]: I0224 05:42:59.200369 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-serving-cert\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.200749 master-0 kubenswrapper[34361]: I0224 05:42:59.200707 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-trusted-ca-bundle\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.203460 master-0 kubenswrapper[34361]: I0224 05:42:59.203370 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-service-ca\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.207218 master-0 kubenswrapper[34361]: I0224 05:42:59.207179 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-oauth-config\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.226554 master-0 kubenswrapper[34361]: I0224 05:42:59.226427 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdb7l\" (UniqueName: \"kubernetes.io/projected/033badcd-d62d-4a0c-a069-874f0892c4d7-kube-api-access-wdb7l\") pod \"console-6d5c5b46fd-qr4b5\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.415566 master-0 kubenswrapper[34361]: I0224 05:42:59.415416 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:42:59.923542 master-0 kubenswrapper[34361]: I0224 05:42:59.923461 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d5c5b46fd-qr4b5"] Feb 24 05:42:59.931078 master-0 kubenswrapper[34361]: W0224 05:42:59.930984 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod033badcd_d62d_4a0c_a069_874f0892c4d7.slice/crio-75ce61a1b47f2a4a477a18a448c0d0e922c19aa144e429f7999f36e188d64164 WatchSource:0}: Error finding container 75ce61a1b47f2a4a477a18a448c0d0e922c19aa144e429f7999f36e188d64164: Status 404 returned error can't find the container with id 75ce61a1b47f2a4a477a18a448c0d0e922c19aa144e429f7999f36e188d64164 Feb 24 05:43:00.001761 master-0 kubenswrapper[34361]: I0224 05:43:00.001658 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d5c5b46fd-qr4b5" event={"ID":"033badcd-d62d-4a0c-a069-874f0892c4d7","Type":"ContainerStarted","Data":"75ce61a1b47f2a4a477a18a448c0d0e922c19aa144e429f7999f36e188d64164"} Feb 24 05:43:01.019766 master-0 kubenswrapper[34361]: I0224 05:43:01.019422 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d5c5b46fd-qr4b5" event={"ID":"033badcd-d62d-4a0c-a069-874f0892c4d7","Type":"ContainerStarted","Data":"c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a"} Feb 24 05:43:01.059815 master-0 kubenswrapper[34361]: I0224 05:43:01.059674 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6d5c5b46fd-qr4b5" podStartSLOduration=3.059639522 podStartE2EDuration="3.059639522s" podCreationTimestamp="2026-02-24 05:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:43:01.047129884 +0000 UTC m=+340.749747000" 
watchObservedRunningTime="2026-02-24 05:43:01.059639522 +0000 UTC m=+340.762256598" Feb 24 05:43:09.416036 master-0 kubenswrapper[34361]: I0224 05:43:09.415931 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:43:09.416036 master-0 kubenswrapper[34361]: I0224 05:43:09.416023 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:43:09.423975 master-0 kubenswrapper[34361]: I0224 05:43:09.423919 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:43:10.136802 master-0 kubenswrapper[34361]: I0224 05:43:10.136717 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:43:10.250032 master-0 kubenswrapper[34361]: I0224 05:43:10.247024 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f64db7f86-6brp5"] Feb 24 05:43:20.910400 master-0 kubenswrapper[34361]: I0224 05:43:20.910276 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-576fb8b7f5-srlps"] Feb 24 05:43:20.911933 master-0 kubenswrapper[34361]: I0224 05:43:20.911899 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:20.936062 master-0 kubenswrapper[34361]: I0224 05:43:20.935287 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-oauth-serving-cert\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:20.936062 master-0 kubenswrapper[34361]: I0224 05:43:20.935490 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxsxj\" (UniqueName: \"kubernetes.io/projected/94166387-6f51-45e5-9ca0-0408bf7067ef-kube-api-access-wxsxj\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:20.936062 master-0 kubenswrapper[34361]: I0224 05:43:20.935682 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-oauth-config\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:20.936062 master-0 kubenswrapper[34361]: I0224 05:43:20.935721 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-serving-cert\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:20.936062 master-0 kubenswrapper[34361]: I0224 05:43:20.935827 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-service-ca\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:20.938421 master-0 kubenswrapper[34361]: I0224 05:43:20.938295 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-trusted-ca-bundle\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:20.938601 master-0 kubenswrapper[34361]: I0224 05:43:20.938452 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-console-config\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:20.941418 master-0 kubenswrapper[34361]: I0224 05:43:20.941133 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-576fb8b7f5-srlps"]
Feb 24 05:43:21.040328 master-0 kubenswrapper[34361]: I0224 05:43:21.040236 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-console-config\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.040596 master-0 kubenswrapper[34361]: I0224 05:43:21.040348 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-oauth-serving-cert\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.040596 master-0 kubenswrapper[34361]: I0224 05:43:21.040437 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxsxj\" (UniqueName: \"kubernetes.io/projected/94166387-6f51-45e5-9ca0-0408bf7067ef-kube-api-access-wxsxj\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.040596 master-0 kubenswrapper[34361]: I0224 05:43:21.040495 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-oauth-config\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.040596 master-0 kubenswrapper[34361]: I0224 05:43:21.040523 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-serving-cert\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.040596 master-0 kubenswrapper[34361]: I0224 05:43:21.040563 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-service-ca\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.040596 master-0 kubenswrapper[34361]: I0224 05:43:21.040593 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-trusted-ca-bundle\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.042221 master-0 kubenswrapper[34361]: I0224 05:43:21.042181 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-trusted-ca-bundle\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.043372 master-0 kubenswrapper[34361]: I0224 05:43:21.043268 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-service-ca\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.043953 master-0 kubenswrapper[34361]: I0224 05:43:21.043909 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-oauth-serving-cert\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.045418 master-0 kubenswrapper[34361]: I0224 05:43:21.044145 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-console-config\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.049476 master-0 kubenswrapper[34361]: I0224 05:43:21.047096 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-serving-cert\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.051082 master-0 kubenswrapper[34361]: I0224 05:43:21.051014 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-oauth-config\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.062530 master-0 kubenswrapper[34361]: I0224 05:43:21.061689 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxsxj\" (UniqueName: \"kubernetes.io/projected/94166387-6f51-45e5-9ca0-0408bf7067ef-kube-api-access-wxsxj\") pod \"console-576fb8b7f5-srlps\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.275567 master-0 kubenswrapper[34361]: I0224 05:43:21.275409 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:21.804171 master-0 kubenswrapper[34361]: I0224 05:43:21.804070 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-576fb8b7f5-srlps"]
Feb 24 05:43:21.818477 master-0 kubenswrapper[34361]: W0224 05:43:21.817589 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94166387_6f51_45e5_9ca0_0408bf7067ef.slice/crio-26de91720c854b164a674efb57fff696e1de839fba0f42b312430a2bf460afa8 WatchSource:0}: Error finding container 26de91720c854b164a674efb57fff696e1de839fba0f42b312430a2bf460afa8: Status 404 returned error can't find the container with id 26de91720c854b164a674efb57fff696e1de839fba0f42b312430a2bf460afa8
Feb 24 05:43:22.263282 master-0 kubenswrapper[34361]: I0224 05:43:22.263135 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576fb8b7f5-srlps" event={"ID":"94166387-6f51-45e5-9ca0-0408bf7067ef","Type":"ContainerStarted","Data":"cf00c8e7123005eda0406a98c2b3995657cdb2d9ccb99201bc063a38dc540e73"}
Feb 24 05:43:22.263282 master-0 kubenswrapper[34361]: I0224 05:43:22.263208 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576fb8b7f5-srlps" event={"ID":"94166387-6f51-45e5-9ca0-0408bf7067ef","Type":"ContainerStarted","Data":"26de91720c854b164a674efb57fff696e1de839fba0f42b312430a2bf460afa8"}
Feb 24 05:43:22.288211 master-0 kubenswrapper[34361]: I0224 05:43:22.288054 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-576fb8b7f5-srlps" podStartSLOduration=2.288025318 podStartE2EDuration="2.288025318s" podCreationTimestamp="2026-02-24 05:43:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:43:22.286469617 +0000 UTC m=+361.989086713" watchObservedRunningTime="2026-02-24 05:43:22.288025318 +0000 UTC m=+361.990642394"
Feb 24 05:43:25.014556 master-0 kubenswrapper[34361]: I0224 05:43:25.014439 34361 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 24 05:43:25.015606 master-0 kubenswrapper[34361]: I0224 05:43:25.014953 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="cluster-policy-controller" containerID="cri-o://25ae168ba418dfc4c1b33e602fae0945e84f4e24a75587f39220f0946080e548" gracePeriod=30
Feb 24 05:43:25.015606 master-0 kubenswrapper[34361]: I0224 05:43:25.015050 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager" containerID="cri-o://a6c4b7c7c8f2d6f7a5d9574827c1d87fc9e887e6f38197076ff1b4325039d136" gracePeriod=30
Feb 24 05:43:25.015606 master-0 kubenswrapper[34361]: I0224 05:43:25.015077 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://7b398e544e2416957c4399885f805d9a52847bdbb755fa9e7b753808f3ff7fcb" gracePeriod=30
Feb 24 05:43:25.015606 master-0 kubenswrapper[34361]: I0224 05:43:25.015101 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://5f3f429a73b99edab07440134a29330648aee1055142d0e2a471d2ca4da191ec" gracePeriod=30
Feb 24 05:43:25.015949 master-0 kubenswrapper[34361]: I0224 05:43:25.015897 34361 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Feb 24 05:43:25.016724 master-0 kubenswrapper[34361]: E0224 05:43:25.016685 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="cluster-policy-controller"
Feb 24 05:43:25.016903 master-0 kubenswrapper[34361]: I0224 05:43:25.016718 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="cluster-policy-controller"
Feb 24 05:43:25.016903 master-0 kubenswrapper[34361]: E0224 05:43:25.016788 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager-cert-syncer"
Feb 24 05:43:25.016903 master-0 kubenswrapper[34361]: I0224 05:43:25.016800 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager-cert-syncer"
Feb 24 05:43:25.016903 master-0 kubenswrapper[34361]: E0224 05:43:25.016818 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager-recovery-controller"
Feb 24 05:43:25.016903 master-0 kubenswrapper[34361]: I0224 05:43:25.016827 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager-recovery-controller"
Feb 24 05:43:25.016903 master-0 kubenswrapper[34361]: E0224 05:43:25.016898 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager"
Feb 24 05:43:25.018509 master-0 kubenswrapper[34361]: I0224 05:43:25.016910 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager"
Feb 24 05:43:25.018509 master-0 kubenswrapper[34361]: E0224 05:43:25.016961 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager"
Feb 24 05:43:25.018509 master-0 kubenswrapper[34361]: I0224 05:43:25.016971 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager"
Feb 24 05:43:25.018509 master-0 kubenswrapper[34361]: I0224 05:43:25.017350 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager-cert-syncer"
Feb 24 05:43:25.018509 master-0 kubenswrapper[34361]: I0224 05:43:25.017607 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager"
Feb 24 05:43:25.018509 master-0 kubenswrapper[34361]: I0224 05:43:25.017633 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="cluster-policy-controller"
Feb 24 05:43:25.018509 master-0 kubenswrapper[34361]: I0224 05:43:25.017804 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager"
Feb 24 05:43:25.018509 master-0 kubenswrapper[34361]: I0224 05:43:25.017823 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0305da6e0b04a4394ef2888a487bfa1" containerName="kube-controller-manager-recovery-controller"
Feb 24 05:43:25.127753 master-0 kubenswrapper[34361]: I0224 05:43:25.127654 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1f3072e55d7ec41fa9e7ebda1b58ca13-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1f3072e55d7ec41fa9e7ebda1b58ca13\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:43:25.127753 master-0 kubenswrapper[34361]: I0224 05:43:25.127708 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1f3072e55d7ec41fa9e7ebda1b58ca13-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1f3072e55d7ec41fa9e7ebda1b58ca13\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:43:25.230536 master-0 kubenswrapper[34361]: I0224 05:43:25.230435 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1f3072e55d7ec41fa9e7ebda1b58ca13-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1f3072e55d7ec41fa9e7ebda1b58ca13\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:43:25.230737 master-0 kubenswrapper[34361]: I0224 05:43:25.230608 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1f3072e55d7ec41fa9e7ebda1b58ca13-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1f3072e55d7ec41fa9e7ebda1b58ca13\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:43:25.230737 master-0 kubenswrapper[34361]: I0224 05:43:25.230692 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1f3072e55d7ec41fa9e7ebda1b58ca13-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1f3072e55d7ec41fa9e7ebda1b58ca13\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:43:25.231067 master-0 kubenswrapper[34361]: I0224 05:43:25.230885 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1f3072e55d7ec41fa9e7ebda1b58ca13-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"1f3072e55d7ec41fa9e7ebda1b58ca13\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:43:25.298084 master-0 kubenswrapper[34361]: I0224 05:43:25.297937 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c0305da6e0b04a4394ef2888a487bfa1/kube-controller-manager-cert-syncer/0.log"
Feb 24 05:43:25.298084 master-0 kubenswrapper[34361]: I0224 05:43:25.297992 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c0305da6e0b04a4394ef2888a487bfa1/kube-controller-manager-cert-syncer/0.log"
Feb 24 05:43:25.299454 master-0 kubenswrapper[34361]: I0224 05:43:25.299429 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c0305da6e0b04a4394ef2888a487bfa1/kube-controller-manager/0.log"
Feb 24 05:43:25.299518 master-0 kubenswrapper[34361]: I0224 05:43:25.299481 34361 generic.go:334] "Generic (PLEG): container finished" podID="c0305da6e0b04a4394ef2888a487bfa1" containerID="a6c4b7c7c8f2d6f7a5d9574827c1d87fc9e887e6f38197076ff1b4325039d136" exitCode=0
Feb 24 05:43:25.299518 master-0 kubenswrapper[34361]: I0224 05:43:25.299503 34361 generic.go:334] "Generic (PLEG): container finished" podID="c0305da6e0b04a4394ef2888a487bfa1" containerID="7b398e544e2416957c4399885f805d9a52847bdbb755fa9e7b753808f3ff7fcb" exitCode=0
Feb 24 05:43:25.299518 master-0 kubenswrapper[34361]: I0224 05:43:25.299512 34361 generic.go:334] "Generic (PLEG): container finished" podID="c0305da6e0b04a4394ef2888a487bfa1" containerID="5f3f429a73b99edab07440134a29330648aee1055142d0e2a471d2ca4da191ec" exitCode=2
Feb 24 05:43:25.299611 master-0 kubenswrapper[34361]: I0224 05:43:25.299522 34361 generic.go:334] "Generic (PLEG): container finished" podID="c0305da6e0b04a4394ef2888a487bfa1" containerID="25ae168ba418dfc4c1b33e602fae0945e84f4e24a75587f39220f0946080e548" exitCode=0
Feb 24 05:43:25.299673 master-0 kubenswrapper[34361]: I0224 05:43:25.299627 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b23dfe329a1134a3919827a4fef6a742a5c3a54647b515a5ae24efa737eaeba7"
Feb 24 05:43:25.299721 master-0 kubenswrapper[34361]: I0224 05:43:25.299656 34361 scope.go:117] "RemoveContainer" containerID="e0f72d95db3b526338789b8fcf2468920b15351bce1ec3d46e5d53624269cc95"
Feb 24 05:43:25.299770 master-0 kubenswrapper[34361]: I0224 05:43:25.299756 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c0305da6e0b04a4394ef2888a487bfa1/kube-controller-manager/0.log"
Feb 24 05:43:25.299858 master-0 kubenswrapper[34361]: I0224 05:43:25.299838 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:43:25.303265 master-0 kubenswrapper[34361]: I0224 05:43:25.303222 34361 generic.go:334] "Generic (PLEG): container finished" podID="449763cc-c98c-4652-b9c3-893213b0efce" containerID="7d8009c5dbdcd8a53dc663b491feee2798b80b08ab18c247d6e2f8a8ed15ae85" exitCode=0
Feb 24 05:43:25.303344 master-0 kubenswrapper[34361]: I0224 05:43:25.303274 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"449763cc-c98c-4652-b9c3-893213b0efce","Type":"ContainerDied","Data":"7d8009c5dbdcd8a53dc663b491feee2798b80b08ab18c247d6e2f8a8ed15ae85"}
Feb 24 05:43:25.304573 master-0 kubenswrapper[34361]: I0224 05:43:25.304526 34361 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c0305da6e0b04a4394ef2888a487bfa1" podUID="1f3072e55d7ec41fa9e7ebda1b58ca13"
Feb 24 05:43:25.335202 master-0 kubenswrapper[34361]: I0224 05:43:25.334516 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir\") pod \"c0305da6e0b04a4394ef2888a487bfa1\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") "
Feb 24 05:43:25.335202 master-0 kubenswrapper[34361]: I0224 05:43:25.334608 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir\") pod \"c0305da6e0b04a4394ef2888a487bfa1\" (UID: \"c0305da6e0b04a4394ef2888a487bfa1\") "
Feb 24 05:43:25.335202 master-0 kubenswrapper[34361]: I0224 05:43:25.334761 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "c0305da6e0b04a4394ef2888a487bfa1" (UID: "c0305da6e0b04a4394ef2888a487bfa1"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:43:25.335202 master-0 kubenswrapper[34361]: I0224 05:43:25.334843 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "c0305da6e0b04a4394ef2888a487bfa1" (UID: "c0305da6e0b04a4394ef2888a487bfa1"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:43:25.335741 master-0 kubenswrapper[34361]: I0224 05:43:25.335284 34361 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:43:25.335741 master-0 kubenswrapper[34361]: I0224 05:43:25.335300 34361 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c0305da6e0b04a4394ef2888a487bfa1-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:43:26.320881 master-0 kubenswrapper[34361]: I0224 05:43:26.320774 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_c0305da6e0b04a4394ef2888a487bfa1/kube-controller-manager-cert-syncer/0.log"
Feb 24 05:43:26.322165 master-0 kubenswrapper[34361]: I0224 05:43:26.322001 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 24 05:43:26.327774 master-0 kubenswrapper[34361]: I0224 05:43:26.327696 34361 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c0305da6e0b04a4394ef2888a487bfa1" podUID="1f3072e55d7ec41fa9e7ebda1b58ca13"
Feb 24 05:43:26.354731 master-0 kubenswrapper[34361]: I0224 05:43:26.354640 34361 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="c0305da6e0b04a4394ef2888a487bfa1" podUID="1f3072e55d7ec41fa9e7ebda1b58ca13"
Feb 24 05:43:26.608137 master-0 kubenswrapper[34361]: I0224 05:43:26.607937 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0305da6e0b04a4394ef2888a487bfa1" path="/var/lib/kubelet/pods/c0305da6e0b04a4394ef2888a487bfa1/volumes"
Feb 24 05:43:26.773818 master-0 kubenswrapper[34361]: I0224 05:43:26.773757 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 05:43:26.866244 master-0 kubenswrapper[34361]: I0224 05:43:26.866023 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-var-lock\") pod \"449763cc-c98c-4652-b9c3-893213b0efce\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") "
Feb 24 05:43:26.866244 master-0 kubenswrapper[34361]: I0224 05:43:26.866116 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-kubelet-dir\") pod \"449763cc-c98c-4652-b9c3-893213b0efce\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") "
Feb 24 05:43:26.866653 master-0 kubenswrapper[34361]: I0224 05:43:26.866271 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-var-lock" (OuterVolumeSpecName: "var-lock") pod "449763cc-c98c-4652-b9c3-893213b0efce" (UID: "449763cc-c98c-4652-b9c3-893213b0efce"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:43:26.866653 master-0 kubenswrapper[34361]: I0224 05:43:26.866353 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449763cc-c98c-4652-b9c3-893213b0efce-kube-api-access\") pod \"449763cc-c98c-4652-b9c3-893213b0efce\" (UID: \"449763cc-c98c-4652-b9c3-893213b0efce\") "
Feb 24 05:43:26.866653 master-0 kubenswrapper[34361]: I0224 05:43:26.866434 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "449763cc-c98c-4652-b9c3-893213b0efce" (UID: "449763cc-c98c-4652-b9c3-893213b0efce"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:43:26.867753 master-0 kubenswrapper[34361]: I0224 05:43:26.867706 34361 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 24 05:43:26.867753 master-0 kubenswrapper[34361]: I0224 05:43:26.867743 34361 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/449763cc-c98c-4652-b9c3-893213b0efce-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 24 05:43:26.870271 master-0 kubenswrapper[34361]: I0224 05:43:26.870179 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/449763cc-c98c-4652-b9c3-893213b0efce-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "449763cc-c98c-4652-b9c3-893213b0efce" (UID: "449763cc-c98c-4652-b9c3-893213b0efce"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:43:26.970758 master-0 kubenswrapper[34361]: I0224 05:43:26.970622 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/449763cc-c98c-4652-b9c3-893213b0efce-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 24 05:43:27.335106 master-0 kubenswrapper[34361]: I0224 05:43:27.334975 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"449763cc-c98c-4652-b9c3-893213b0efce","Type":"ContainerDied","Data":"4a2300b300a7f7e7bf562a0ca9e918e3a90fffeaf6369ddbd455b3dab58aa74e"}
Feb 24 05:43:27.335106 master-0 kubenswrapper[34361]: I0224 05:43:27.335089 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Feb 24 05:43:27.336489 master-0 kubenswrapper[34361]: I0224 05:43:27.335103 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a2300b300a7f7e7bf562a0ca9e918e3a90fffeaf6369ddbd455b3dab58aa74e"
Feb 24 05:43:31.276641 master-0 kubenswrapper[34361]: I0224 05:43:31.276553 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:31.276641 master-0 kubenswrapper[34361]: I0224 05:43:31.276658 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:31.283448 master-0 kubenswrapper[34361]: I0224 05:43:31.283414 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:31.493369 master-0 kubenswrapper[34361]: I0224 05:43:31.493281 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-576fb8b7f5-srlps"
Feb 24 05:43:35.332685 master-0 kubenswrapper[34361]: I0224 05:43:35.332380 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6f64db7f86-6brp5" podUID="4976bb0c-7870-482c-ab61-fcafe26f0e8c" containerName="console" containerID="cri-o://48ab9caee7e8e3d9f3190b4432726245c0d7ade0b16d3266dd224f51c2716174" gracePeriod=15
Feb 24 05:43:35.548154 master-0 kubenswrapper[34361]: I0224 05:43:35.548061 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f64db7f86-6brp5_4976bb0c-7870-482c-ab61-fcafe26f0e8c/console/0.log"
Feb 24 05:43:35.548515 master-0 kubenswrapper[34361]: I0224 05:43:35.548176 34361 generic.go:334] "Generic (PLEG): container finished" podID="4976bb0c-7870-482c-ab61-fcafe26f0e8c" containerID="48ab9caee7e8e3d9f3190b4432726245c0d7ade0b16d3266dd224f51c2716174" exitCode=2
Feb 24 05:43:35.548515 master-0 kubenswrapper[34361]: I0224 05:43:35.548237 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f64db7f86-6brp5" event={"ID":"4976bb0c-7870-482c-ab61-fcafe26f0e8c","Type":"ContainerDied","Data":"48ab9caee7e8e3d9f3190b4432726245c0d7ade0b16d3266dd224f51c2716174"}
Feb 24 05:43:35.913186 master-0 kubenswrapper[34361]: I0224 05:43:35.913113 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f64db7f86-6brp5_4976bb0c-7870-482c-ab61-fcafe26f0e8c/console/0.log"
Feb 24 05:43:35.913328 master-0 kubenswrapper[34361]: I0224 05:43:35.913242 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f64db7f86-6brp5"
Feb 24 05:43:35.977053 master-0 kubenswrapper[34361]: I0224 05:43:35.976955 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-oauth-serving-cert\") pod \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") "
Feb 24 05:43:35.977053 master-0 kubenswrapper[34361]: I0224 05:43:35.977064 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-config\") pod \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") "
Feb 24 05:43:35.977449 master-0 kubenswrapper[34361]: I0224 05:43:35.977090 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-service-ca\") pod \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") "
Feb 24 05:43:35.977449 master-0 kubenswrapper[34361]: I0224 05:43:35.977123 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6f8p\" (UniqueName: \"kubernetes.io/projected/4976bb0c-7870-482c-ab61-fcafe26f0e8c-kube-api-access-j6f8p\") pod \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") "
Feb 24 05:43:35.977449 master-0 kubenswrapper[34361]: I0224 05:43:35.977187 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-serving-cert\") pod \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") "
Feb 24 05:43:35.977449 master-0 kubenswrapper[34361]: I0224 05:43:35.977281 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-oauth-config\") pod \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") "
Feb 24 05:43:35.977449 master-0 kubenswrapper[34361]: I0224 05:43:35.977356 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-trusted-ca-bundle\") pod \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\" (UID: \"4976bb0c-7870-482c-ab61-fcafe26f0e8c\") "
Feb 24 05:43:35.978199 master-0 kubenswrapper[34361]: I0224 05:43:35.978128 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4976bb0c-7870-482c-ab61-fcafe26f0e8c" (UID: "4976bb0c-7870-482c-ab61-fcafe26f0e8c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:43:35.978669 master-0 kubenswrapper[34361]: I0224 05:43:35.978632 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4976bb0c-7870-482c-ab61-fcafe26f0e8c" (UID: "4976bb0c-7870-482c-ab61-fcafe26f0e8c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:43:35.979935 master-0 kubenswrapper[34361]: I0224 05:43:35.979867 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-config" (OuterVolumeSpecName: "console-config") pod "4976bb0c-7870-482c-ab61-fcafe26f0e8c" (UID: "4976bb0c-7870-482c-ab61-fcafe26f0e8c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:43:35.980754 master-0 kubenswrapper[34361]: I0224 05:43:35.980709 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-service-ca" (OuterVolumeSpecName: "service-ca") pod "4976bb0c-7870-482c-ab61-fcafe26f0e8c" (UID: "4976bb0c-7870-482c-ab61-fcafe26f0e8c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:43:35.981995 master-0 kubenswrapper[34361]: I0224 05:43:35.981930 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4976bb0c-7870-482c-ab61-fcafe26f0e8c-kube-api-access-j6f8p" (OuterVolumeSpecName: "kube-api-access-j6f8p") pod "4976bb0c-7870-482c-ab61-fcafe26f0e8c" (UID: "4976bb0c-7870-482c-ab61-fcafe26f0e8c"). InnerVolumeSpecName "kube-api-access-j6f8p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:43:35.984386 master-0 kubenswrapper[34361]: I0224 05:43:35.984339 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4976bb0c-7870-482c-ab61-fcafe26f0e8c" (UID: "4976bb0c-7870-482c-ab61-fcafe26f0e8c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:43:35.989693 master-0 kubenswrapper[34361]: I0224 05:43:35.989622 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4976bb0c-7870-482c-ab61-fcafe26f0e8c" (UID: "4976bb0c-7870-482c-ab61-fcafe26f0e8c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:43:36.079798 master-0 kubenswrapper[34361]: I0224 05:43:36.079694 34361 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:43:36.079798 master-0 kubenswrapper[34361]: I0224 05:43:36.079747 34361 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:43:36.079798 master-0 kubenswrapper[34361]: I0224 05:43:36.079758 34361 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 24 05:43:36.079798 master-0 kubenswrapper[34361]: I0224 05:43:36.079770 34361 reconciler_common.go:293] "Volume 
detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:43:36.079798 master-0 kubenswrapper[34361]: I0224 05:43:36.079783 34361 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:43:36.079798 master-0 kubenswrapper[34361]: I0224 05:43:36.079793 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6f8p\" (UniqueName: \"kubernetes.io/projected/4976bb0c-7870-482c-ab61-fcafe26f0e8c-kube-api-access-j6f8p\") on node \"master-0\" DevicePath \"\"" Feb 24 05:43:36.079798 master-0 kubenswrapper[34361]: I0224 05:43:36.079803 34361 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4976bb0c-7870-482c-ab61-fcafe26f0e8c-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:43:36.562412 master-0 kubenswrapper[34361]: I0224 05:43:36.562254 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f64db7f86-6brp5_4976bb0c-7870-482c-ab61-fcafe26f0e8c/console/0.log" Feb 24 05:43:36.562412 master-0 kubenswrapper[34361]: I0224 05:43:36.562414 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f64db7f86-6brp5" event={"ID":"4976bb0c-7870-482c-ab61-fcafe26f0e8c","Type":"ContainerDied","Data":"f9a68acf5c7c050de6b1c1e54bff5cd5929bdb9506df4224740abe85844d0896"} Feb 24 05:43:36.563454 master-0 kubenswrapper[34361]: I0224 05:43:36.562498 34361 scope.go:117] "RemoveContainer" containerID="48ab9caee7e8e3d9f3190b4432726245c0d7ade0b16d3266dd224f51c2716174" Feb 24 05:43:36.563454 master-0 kubenswrapper[34361]: I0224 05:43:36.562585 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f64db7f86-6brp5" Feb 24 05:43:36.643599 master-0 kubenswrapper[34361]: I0224 05:43:36.643440 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f64db7f86-6brp5"] Feb 24 05:43:36.658393 master-0 kubenswrapper[34361]: I0224 05:43:36.657702 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f64db7f86-6brp5"] Feb 24 05:43:38.613941 master-0 kubenswrapper[34361]: I0224 05:43:38.613838 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4976bb0c-7870-482c-ab61-fcafe26f0e8c" path="/var/lib/kubelet/pods/4976bb0c-7870-482c-ab61-fcafe26f0e8c/volumes" Feb 24 05:43:39.597501 master-0 kubenswrapper[34361]: I0224 05:43:39.597392 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:39.623673 master-0 kubenswrapper[34361]: I0224 05:43:39.623586 34361 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bb49036d-0a5f-422d-99b9-fe9a01eac354" Feb 24 05:43:39.623673 master-0 kubenswrapper[34361]: I0224 05:43:39.623670 34361 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bb49036d-0a5f-422d-99b9-fe9a01eac354" Feb 24 05:43:39.651177 master-0 kubenswrapper[34361]: I0224 05:43:39.651062 34361 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:39.652680 master-0 kubenswrapper[34361]: I0224 05:43:39.652607 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 05:43:39.666005 master-0 kubenswrapper[34361]: I0224 05:43:39.665929 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 05:43:39.670804 master-0 kubenswrapper[34361]: I0224 05:43:39.670725 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:39.684825 master-0 kubenswrapper[34361]: I0224 05:43:39.683033 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 24 05:43:40.614619 master-0 kubenswrapper[34361]: I0224 05:43:40.614554 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1f3072e55d7ec41fa9e7ebda1b58ca13","Type":"ContainerStarted","Data":"b179543cc03c75bf9b7d196166b522130f3da36b9e459a8b51387e87203bdcf8"} Feb 24 05:43:40.614619 master-0 kubenswrapper[34361]: I0224 05:43:40.614619 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1f3072e55d7ec41fa9e7ebda1b58ca13","Type":"ContainerStarted","Data":"d01472063b92145d3da353134d70b3d800048cc98ab48b298bb67f77b6b3c059"} Feb 24 05:43:40.614895 master-0 kubenswrapper[34361]: I0224 05:43:40.614637 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1f3072e55d7ec41fa9e7ebda1b58ca13","Type":"ContainerStarted","Data":"fbf64fa04255d5e1e72ce4a236cd5ba39038f4dd6a454063f6442c24cc711843"} Feb 24 05:43:41.618789 master-0 kubenswrapper[34361]: I0224 05:43:41.618714 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1f3072e55d7ec41fa9e7ebda1b58ca13","Type":"ContainerStarted","Data":"bb508f1c12079229d2cb94cc5c672cae22789ce4b3e2af7f1dc6c7cf7037d904"} Feb 24 05:43:41.619702 master-0 kubenswrapper[34361]: I0224 05:43:41.619675 34361 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"1f3072e55d7ec41fa9e7ebda1b58ca13","Type":"ContainerStarted","Data":"2b7f95c5d9f7cb0adb7d88fbbb53b7a639c5742b71c4dad2a59e646a955ed3f0"} Feb 24 05:43:41.639436 master-0 kubenswrapper[34361]: I0224 05:43:41.639281 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.6392537799999998 podStartE2EDuration="2.63925378s" podCreationTimestamp="2026-02-24 05:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:43:41.638642953 +0000 UTC m=+381.341260009" watchObservedRunningTime="2026-02-24 05:43:41.63925378 +0000 UTC m=+381.341870836" Feb 24 05:43:49.672013 master-0 kubenswrapper[34361]: I0224 05:43:49.671909 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:49.672013 master-0 kubenswrapper[34361]: I0224 05:43:49.672008 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:49.672013 master-0 kubenswrapper[34361]: I0224 05:43:49.672032 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:49.673646 master-0 kubenswrapper[34361]: I0224 05:43:49.672051 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:49.681104 master-0 kubenswrapper[34361]: I0224 05:43:49.681028 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
Feb 24 05:43:49.681587 master-0 kubenswrapper[34361]: I0224 05:43:49.681518 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:49.705619 master-0 kubenswrapper[34361]: I0224 05:43:49.705498 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:50.716438 master-0 kubenswrapper[34361]: I0224 05:43:50.716276 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 24 05:43:59.068480 master-0 kubenswrapper[34361]: I0224 05:43:59.067755 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rjgth"] Feb 24 05:43:59.068480 master-0 kubenswrapper[34361]: E0224 05:43:59.068131 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4976bb0c-7870-482c-ab61-fcafe26f0e8c" containerName="console" Feb 24 05:43:59.068480 master-0 kubenswrapper[34361]: I0224 05:43:59.068149 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="4976bb0c-7870-482c-ab61-fcafe26f0e8c" containerName="console" Feb 24 05:43:59.068480 master-0 kubenswrapper[34361]: E0224 05:43:59.068179 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="449763cc-c98c-4652-b9c3-893213b0efce" containerName="installer" Feb 24 05:43:59.068480 master-0 kubenswrapper[34361]: I0224 05:43:59.068188 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="449763cc-c98c-4652-b9c3-893213b0efce" containerName="installer" Feb 24 05:43:59.073178 master-0 kubenswrapper[34361]: I0224 05:43:59.070301 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="449763cc-c98c-4652-b9c3-893213b0efce" containerName="installer" Feb 24 05:43:59.073178 master-0 kubenswrapper[34361]: I0224 05:43:59.070369 34361 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4976bb0c-7870-482c-ab61-fcafe26f0e8c" containerName="console" Feb 24 05:43:59.073178 master-0 kubenswrapper[34361]: I0224 05:43:59.071016 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.073540 master-0 kubenswrapper[34361]: I0224 05:43:59.073211 34361 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Feb 24 05:43:59.073540 master-0 kubenswrapper[34361]: I0224 05:43:59.073484 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 24 05:43:59.073738 master-0 kubenswrapper[34361]: I0224 05:43:59.073700 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Feb 24 05:43:59.074054 master-0 kubenswrapper[34361]: I0224 05:43:59.073973 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Feb 24 05:43:59.101018 master-0 kubenswrapper[34361]: I0224 05:43:59.095839 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6d5c5b46fd-qr4b5"] Feb 24 05:43:59.115188 master-0 kubenswrapper[34361]: I0224 05:43:59.115091 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rjgth"] Feb 24 05:43:59.188521 master-0 kubenswrapper[34361]: I0224 05:43:59.188424 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0b2fa994-aafc-4629-a833-1dc2435b42f4-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.188837 master-0 kubenswrapper[34361]: I0224 05:43:59.188541 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0b2fa994-aafc-4629-a833-1dc2435b42f4-os-client-config\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.188837 master-0 kubenswrapper[34361]: I0224 05:43:59.188626 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpl8\" (UniqueName: \"kubernetes.io/projected/0b2fa994-aafc-4629-a833-1dc2435b42f4-kube-api-access-4gpl8\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.291026 master-0 kubenswrapper[34361]: I0224 05:43:59.290942 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0b2fa994-aafc-4629-a833-1dc2435b42f4-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.291366 master-0 kubenswrapper[34361]: I0224 05:43:59.291141 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0b2fa994-aafc-4629-a833-1dc2435b42f4-os-client-config\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.291366 master-0 kubenswrapper[34361]: I0224 05:43:59.291188 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gpl8\" (UniqueName: \"kubernetes.io/projected/0b2fa994-aafc-4629-a833-1dc2435b42f4-kube-api-access-4gpl8\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " 
pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.292558 master-0 kubenswrapper[34361]: I0224 05:43:59.292512 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0b2fa994-aafc-4629-a833-1dc2435b42f4-sushy-emulator-config\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.297162 master-0 kubenswrapper[34361]: I0224 05:43:59.297095 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0b2fa994-aafc-4629-a833-1dc2435b42f4-os-client-config\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.311628 master-0 kubenswrapper[34361]: I0224 05:43:59.311570 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gpl8\" (UniqueName: \"kubernetes.io/projected/0b2fa994-aafc-4629-a833-1dc2435b42f4-kube-api-access-4gpl8\") pod \"sushy-emulator-78f6d7d749-rjgth\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.421145 master-0 kubenswrapper[34361]: I0224 05:43:59.420924 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:43:59.873029 master-0 kubenswrapper[34361]: I0224 05:43:59.872936 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rjgth"] Feb 24 05:43:59.874499 master-0 kubenswrapper[34361]: W0224 05:43:59.873986 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b2fa994_aafc_4629_a833_1dc2435b42f4.slice/crio-39cc02a2f5a4e465de05fad2e8bc40077a91b3c4f719bdecd649d3bb4da0cee2 WatchSource:0}: Error finding container 39cc02a2f5a4e465de05fad2e8bc40077a91b3c4f719bdecd649d3bb4da0cee2: Status 404 returned error can't find the container with id 39cc02a2f5a4e465de05fad2e8bc40077a91b3c4f719bdecd649d3bb4da0cee2 Feb 24 05:43:59.879367 master-0 kubenswrapper[34361]: I0224 05:43:59.879229 34361 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 05:44:00.863337 master-0 kubenswrapper[34361]: I0224 05:44:00.861878 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" event={"ID":"0b2fa994-aafc-4629-a833-1dc2435b42f4","Type":"ContainerStarted","Data":"39cc02a2f5a4e465de05fad2e8bc40077a91b3c4f719bdecd649d3bb4da0cee2"} Feb 24 05:44:06.920356 master-0 kubenswrapper[34361]: I0224 05:44:06.920113 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" event={"ID":"0b2fa994-aafc-4629-a833-1dc2435b42f4","Type":"ContainerStarted","Data":"499d7907cff13e523f56adf0fa8f5df83fc3c5415a327eb0d78ff44229bc4782"} Feb 24 05:44:06.949292 master-0 kubenswrapper[34361]: I0224 05:44:06.949135 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" podStartSLOduration=1.297682478 podStartE2EDuration="7.949103965s" podCreationTimestamp="2026-02-24 05:43:59 +0000 UTC" 
firstStartedPulling="2026-02-24 05:43:59.879156501 +0000 UTC m=+399.581773557" lastFinishedPulling="2026-02-24 05:44:06.530577988 +0000 UTC m=+406.233195044" observedRunningTime="2026-02-24 05:44:06.942821921 +0000 UTC m=+406.645439157" watchObservedRunningTime="2026-02-24 05:44:06.949103965 +0000 UTC m=+406.651721041" Feb 24 05:44:09.421647 master-0 kubenswrapper[34361]: I0224 05:44:09.421530 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:44:09.421647 master-0 kubenswrapper[34361]: I0224 05:44:09.421647 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:44:09.435411 master-0 kubenswrapper[34361]: I0224 05:44:09.435347 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:44:09.957489 master-0 kubenswrapper[34361]: I0224 05:44:09.957406 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:44:20.946204 master-0 kubenswrapper[34361]: I0224 05:44:20.946076 34361 scope.go:117] "RemoveContainer" containerID="7b398e544e2416957c4399885f805d9a52847bdbb755fa9e7b753808f3ff7fcb" Feb 24 05:44:20.983005 master-0 kubenswrapper[34361]: I0224 05:44:20.982932 34361 scope.go:117] "RemoveContainer" containerID="25ae168ba418dfc4c1b33e602fae0945e84f4e24a75587f39220f0946080e548" Feb 24 05:44:21.013190 master-0 kubenswrapper[34361]: I0224 05:44:21.013106 34361 scope.go:117] "RemoveContainer" containerID="5f3f429a73b99edab07440134a29330648aee1055142d0e2a471d2ca4da191ec" Feb 24 05:44:24.159770 master-0 kubenswrapper[34361]: I0224 05:44:24.159681 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6d5c5b46fd-qr4b5" podUID="033badcd-d62d-4a0c-a069-874f0892c4d7" containerName="console" 
containerID="cri-o://c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a" gracePeriod=15 Feb 24 05:44:24.787880 master-0 kubenswrapper[34361]: I0224 05:44:24.787769 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6d5c5b46fd-qr4b5_033badcd-d62d-4a0c-a069-874f0892c4d7/console/0.log" Feb 24 05:44:24.788229 master-0 kubenswrapper[34361]: I0224 05:44:24.787929 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:44:24.911995 master-0 kubenswrapper[34361]: I0224 05:44:24.911892 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-trusted-ca-bundle\") pod \"033badcd-d62d-4a0c-a069-874f0892c4d7\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " Feb 24 05:44:24.912448 master-0 kubenswrapper[34361]: I0224 05:44:24.912053 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-serving-cert\") pod \"033badcd-d62d-4a0c-a069-874f0892c4d7\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " Feb 24 05:44:24.912448 master-0 kubenswrapper[34361]: I0224 05:44:24.912143 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdb7l\" (UniqueName: \"kubernetes.io/projected/033badcd-d62d-4a0c-a069-874f0892c4d7-kube-api-access-wdb7l\") pod \"033badcd-d62d-4a0c-a069-874f0892c4d7\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " Feb 24 05:44:24.912448 master-0 kubenswrapper[34361]: I0224 05:44:24.912205 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-oauth-config\") pod 
\"033badcd-d62d-4a0c-a069-874f0892c4d7\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " Feb 24 05:44:24.912448 master-0 kubenswrapper[34361]: I0224 05:44:24.912230 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-oauth-serving-cert\") pod \"033badcd-d62d-4a0c-a069-874f0892c4d7\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " Feb 24 05:44:24.912448 master-0 kubenswrapper[34361]: I0224 05:44:24.912343 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-service-ca\") pod \"033badcd-d62d-4a0c-a069-874f0892c4d7\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " Feb 24 05:44:24.912448 master-0 kubenswrapper[34361]: I0224 05:44:24.912386 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-console-config\") pod \"033badcd-d62d-4a0c-a069-874f0892c4d7\" (UID: \"033badcd-d62d-4a0c-a069-874f0892c4d7\") " Feb 24 05:44:24.913251 master-0 kubenswrapper[34361]: I0224 05:44:24.912922 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "033badcd-d62d-4a0c-a069-874f0892c4d7" (UID: "033badcd-d62d-4a0c-a069-874f0892c4d7"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:44:24.913505 master-0 kubenswrapper[34361]: I0224 05:44:24.913429 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "033badcd-d62d-4a0c-a069-874f0892c4d7" (UID: "033badcd-d62d-4a0c-a069-874f0892c4d7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:44:24.914221 master-0 kubenswrapper[34361]: I0224 05:44:24.914129 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-console-config" (OuterVolumeSpecName: "console-config") pod "033badcd-d62d-4a0c-a069-874f0892c4d7" (UID: "033badcd-d62d-4a0c-a069-874f0892c4d7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:44:24.914429 master-0 kubenswrapper[34361]: I0224 05:44:24.914267 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-service-ca" (OuterVolumeSpecName: "service-ca") pod "033badcd-d62d-4a0c-a069-874f0892c4d7" (UID: "033badcd-d62d-4a0c-a069-874f0892c4d7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:44:24.915951 master-0 kubenswrapper[34361]: I0224 05:44:24.915820 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "033badcd-d62d-4a0c-a069-874f0892c4d7" (UID: "033badcd-d62d-4a0c-a069-874f0892c4d7"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:44:24.919356 master-0 kubenswrapper[34361]: I0224 05:44:24.919254 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/033badcd-d62d-4a0c-a069-874f0892c4d7-kube-api-access-wdb7l" (OuterVolumeSpecName: "kube-api-access-wdb7l") pod "033badcd-d62d-4a0c-a069-874f0892c4d7" (UID: "033badcd-d62d-4a0c-a069-874f0892c4d7"). InnerVolumeSpecName "kube-api-access-wdb7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:44:24.920154 master-0 kubenswrapper[34361]: I0224 05:44:24.920087 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "033badcd-d62d-4a0c-a069-874f0892c4d7" (UID: "033badcd-d62d-4a0c-a069-874f0892c4d7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:44:25.015831 master-0 kubenswrapper[34361]: I0224 05:44:25.015717 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdb7l\" (UniqueName: \"kubernetes.io/projected/033badcd-d62d-4a0c-a069-874f0892c4d7-kube-api-access-wdb7l\") on node \"master-0\" DevicePath \"\"" Feb 24 05:44:25.015831 master-0 kubenswrapper[34361]: I0224 05:44:25.015807 34361 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:44:25.015831 master-0 kubenswrapper[34361]: I0224 05:44:25.015835 34361 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:44:25.016658 master-0 kubenswrapper[34361]: I0224 05:44:25.015860 34361 reconciler_common.go:293] 
"Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:44:25.016658 master-0 kubenswrapper[34361]: I0224 05:44:25.015888 34361 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-console-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:44:25.016658 master-0 kubenswrapper[34361]: I0224 05:44:25.015913 34361 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/033badcd-d62d-4a0c-a069-874f0892c4d7-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:44:25.016658 master-0 kubenswrapper[34361]: I0224 05:44:25.015942 34361 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/033badcd-d62d-4a0c-a069-874f0892c4d7-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:44:25.114598 master-0 kubenswrapper[34361]: I0224 05:44:25.114484 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6d5c5b46fd-qr4b5_033badcd-d62d-4a0c-a069-874f0892c4d7/console/0.log" Feb 24 05:44:25.114927 master-0 kubenswrapper[34361]: I0224 05:44:25.114642 34361 generic.go:334] "Generic (PLEG): container finished" podID="033badcd-d62d-4a0c-a069-874f0892c4d7" containerID="c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a" exitCode=2 Feb 24 05:44:25.114927 master-0 kubenswrapper[34361]: I0224 05:44:25.114717 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d5c5b46fd-qr4b5" event={"ID":"033badcd-d62d-4a0c-a069-874f0892c4d7","Type":"ContainerDied","Data":"c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a"} Feb 24 05:44:25.114927 master-0 kubenswrapper[34361]: I0224 05:44:25.114759 34361 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-console/console-6d5c5b46fd-qr4b5" Feb 24 05:44:25.114927 master-0 kubenswrapper[34361]: I0224 05:44:25.114782 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d5c5b46fd-qr4b5" event={"ID":"033badcd-d62d-4a0c-a069-874f0892c4d7","Type":"ContainerDied","Data":"75ce61a1b47f2a4a477a18a448c0d0e922c19aa144e429f7999f36e188d64164"} Feb 24 05:44:25.114927 master-0 kubenswrapper[34361]: I0224 05:44:25.114830 34361 scope.go:117] "RemoveContainer" containerID="c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a" Feb 24 05:44:25.142770 master-0 kubenswrapper[34361]: I0224 05:44:25.142717 34361 scope.go:117] "RemoveContainer" containerID="c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a" Feb 24 05:44:25.143736 master-0 kubenswrapper[34361]: E0224 05:44:25.143649 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a\": container with ID starting with c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a not found: ID does not exist" containerID="c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a" Feb 24 05:44:25.143843 master-0 kubenswrapper[34361]: I0224 05:44:25.143757 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a"} err="failed to get container status \"c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a\": rpc error: code = NotFound desc = could not find container \"c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a\": container with ID starting with c5d108c7a5fab59ef65c16a63d18af708b4c39dc3257d78b881f6e901084431a not found: ID does not exist" Feb 24 05:44:25.173556 master-0 kubenswrapper[34361]: I0224 05:44:25.173501 34361 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-console/console-6d5c5b46fd-qr4b5"] Feb 24 05:44:25.183486 master-0 kubenswrapper[34361]: I0224 05:44:25.183409 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6d5c5b46fd-qr4b5"] Feb 24 05:44:26.612061 master-0 kubenswrapper[34361]: I0224 05:44:26.611947 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="033badcd-d62d-4a0c-a069-874f0892c4d7" path="/var/lib/kubelet/pods/033badcd-d62d-4a0c-a069-874f0892c4d7/volumes" Feb 24 05:44:29.301744 master-0 kubenswrapper[34361]: I0224 05:44:29.301665 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm"] Feb 24 05:44:29.302752 master-0 kubenswrapper[34361]: E0224 05:44:29.301993 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="033badcd-d62d-4a0c-a069-874f0892c4d7" containerName="console" Feb 24 05:44:29.302752 master-0 kubenswrapper[34361]: I0224 05:44:29.302008 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="033badcd-d62d-4a0c-a069-874f0892c4d7" containerName="console" Feb 24 05:44:29.302752 master-0 kubenswrapper[34361]: I0224 05:44:29.302181 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="033badcd-d62d-4a0c-a069-874f0892c4d7" containerName="console" Feb 24 05:44:29.302996 master-0 kubenswrapper[34361]: I0224 05:44:29.302960 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" Feb 24 05:44:29.327868 master-0 kubenswrapper[34361]: I0224 05:44:29.327773 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm"] Feb 24 05:44:29.407041 master-0 kubenswrapper[34361]: I0224 05:44:29.406902 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ee843e04-e1fe-41bf-afd4-763016268156-os-client-config\") pod \"nova-console-poller-5bbdbdc4dc-t2lxm\" (UID: \"ee843e04-e1fe-41bf-afd4-763016268156\") " pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" Feb 24 05:44:29.407450 master-0 kubenswrapper[34361]: I0224 05:44:29.407072 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd6cf\" (UniqueName: \"kubernetes.io/projected/ee843e04-e1fe-41bf-afd4-763016268156-kube-api-access-cd6cf\") pod \"nova-console-poller-5bbdbdc4dc-t2lxm\" (UID: \"ee843e04-e1fe-41bf-afd4-763016268156\") " pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" Feb 24 05:44:29.509045 master-0 kubenswrapper[34361]: I0224 05:44:29.508916 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ee843e04-e1fe-41bf-afd4-763016268156-os-client-config\") pod \"nova-console-poller-5bbdbdc4dc-t2lxm\" (UID: \"ee843e04-e1fe-41bf-afd4-763016268156\") " pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" Feb 24 05:44:29.509045 master-0 kubenswrapper[34361]: I0224 05:44:29.509056 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd6cf\" (UniqueName: \"kubernetes.io/projected/ee843e04-e1fe-41bf-afd4-763016268156-kube-api-access-cd6cf\") pod \"nova-console-poller-5bbdbdc4dc-t2lxm\" (UID: \"ee843e04-e1fe-41bf-afd4-763016268156\") " 
pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" Feb 24 05:44:29.515171 master-0 kubenswrapper[34361]: I0224 05:44:29.515103 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/ee843e04-e1fe-41bf-afd4-763016268156-os-client-config\") pod \"nova-console-poller-5bbdbdc4dc-t2lxm\" (UID: \"ee843e04-e1fe-41bf-afd4-763016268156\") " pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" Feb 24 05:44:29.540804 master-0 kubenswrapper[34361]: I0224 05:44:29.540682 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd6cf\" (UniqueName: \"kubernetes.io/projected/ee843e04-e1fe-41bf-afd4-763016268156-kube-api-access-cd6cf\") pod \"nova-console-poller-5bbdbdc4dc-t2lxm\" (UID: \"ee843e04-e1fe-41bf-afd4-763016268156\") " pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" Feb 24 05:44:29.677441 master-0 kubenswrapper[34361]: I0224 05:44:29.675915 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" Feb 24 05:44:30.194175 master-0 kubenswrapper[34361]: I0224 05:44:30.194065 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm"] Feb 24 05:44:30.195409 master-0 kubenswrapper[34361]: W0224 05:44:30.195284 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee843e04_e1fe_41bf_afd4_763016268156.slice/crio-86636ecfac74e5492fc52e7caa0cb2344daa55859c05fdbb6aa914e9649f7ebb WatchSource:0}: Error finding container 86636ecfac74e5492fc52e7caa0cb2344daa55859c05fdbb6aa914e9649f7ebb: Status 404 returned error can't find the container with id 86636ecfac74e5492fc52e7caa0cb2344daa55859c05fdbb6aa914e9649f7ebb Feb 24 05:44:31.183566 master-0 kubenswrapper[34361]: I0224 05:44:31.183493 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" event={"ID":"ee843e04-e1fe-41bf-afd4-763016268156","Type":"ContainerStarted","Data":"86636ecfac74e5492fc52e7caa0cb2344daa55859c05fdbb6aa914e9649f7ebb"} Feb 24 05:44:36.240058 master-0 kubenswrapper[34361]: I0224 05:44:36.239899 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" event={"ID":"ee843e04-e1fe-41bf-afd4-763016268156","Type":"ContainerStarted","Data":"a292156191d302a4b9dada33ad49969396dbbd1f6a7bb3f8042983cd79cb558e"} Feb 24 05:44:37.252679 master-0 kubenswrapper[34361]: I0224 05:44:37.252498 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" event={"ID":"ee843e04-e1fe-41bf-afd4-763016268156","Type":"ContainerStarted","Data":"bfd61ae709bc9084696ef3ffff2e0e8e097b21d89c4b97220fa8b6e588bf5358"} Feb 24 05:44:37.293678 master-0 kubenswrapper[34361]: I0224 05:44:37.293497 34361 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="sushy-emulator/nova-console-poller-5bbdbdc4dc-t2lxm" podStartSLOduration=2.406093759 podStartE2EDuration="8.293464609s" podCreationTimestamp="2026-02-24 05:44:29 +0000 UTC" firstStartedPulling="2026-02-24 05:44:30.200005004 +0000 UTC m=+429.902622090" lastFinishedPulling="2026-02-24 05:44:36.087375894 +0000 UTC m=+435.789992940" observedRunningTime="2026-02-24 05:44:37.281775325 +0000 UTC m=+436.984392451" watchObservedRunningTime="2026-02-24 05:44:37.293464609 +0000 UTC m=+436.996081685" Feb 24 05:45:00.232361 master-0 kubenswrapper[34361]: I0224 05:45:00.232189 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht"] Feb 24 05:45:00.234101 master-0 kubenswrapper[34361]: I0224 05:45:00.234027 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.237808 master-0 kubenswrapper[34361]: I0224 05:45:00.237707 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-27rfg" Feb 24 05:45:00.238226 master-0 kubenswrapper[34361]: I0224 05:45:00.238129 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 05:45:00.251854 master-0 kubenswrapper[34361]: I0224 05:45:00.251603 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht"] Feb 24 05:45:00.376526 master-0 kubenswrapper[34361]: I0224 05:45:00.376443 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4713f8-2961-462e-bdf0-ba653bd29445-config-volume\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.377062 master-0 kubenswrapper[34361]: I0224 05:45:00.377028 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4713f8-2961-462e-bdf0-ba653bd29445-secret-volume\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.377456 master-0 kubenswrapper[34361]: I0224 05:45:00.377416 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdcx\" (UniqueName: \"kubernetes.io/projected/ec4713f8-2961-462e-bdf0-ba653bd29445-kube-api-access-vmdcx\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.480195 master-0 kubenswrapper[34361]: I0224 05:45:00.480095 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4713f8-2961-462e-bdf0-ba653bd29445-secret-volume\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.480703 master-0 kubenswrapper[34361]: I0224 05:45:00.480223 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmdcx\" (UniqueName: \"kubernetes.io/projected/ec4713f8-2961-462e-bdf0-ba653bd29445-kube-api-access-vmdcx\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.480703 master-0 kubenswrapper[34361]: I0224 05:45:00.480277 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4713f8-2961-462e-bdf0-ba653bd29445-config-volume\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.481751 master-0 kubenswrapper[34361]: I0224 05:45:00.481654 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4713f8-2961-462e-bdf0-ba653bd29445-config-volume\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.486221 master-0 kubenswrapper[34361]: I0224 05:45:00.486078 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4713f8-2961-462e-bdf0-ba653bd29445-secret-volume\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.503175 master-0 kubenswrapper[34361]: I0224 05:45:00.503081 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmdcx\" (UniqueName: \"kubernetes.io/projected/ec4713f8-2961-462e-bdf0-ba653bd29445-kube-api-access-vmdcx\") pod \"collect-profiles-29531865-5wmht\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:00.577167 master-0 kubenswrapper[34361]: I0224 05:45:00.577071 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:01.111439 master-0 kubenswrapper[34361]: I0224 05:45:01.110021 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht"] Feb 24 05:45:01.501645 master-0 kubenswrapper[34361]: I0224 05:45:01.501562 34361 generic.go:334] "Generic (PLEG): container finished" podID="ec4713f8-2961-462e-bdf0-ba653bd29445" containerID="c1ac0cee8a525ab68a46ab6804577690a88d31fed8c7b6cb524c5f069f7f51c4" exitCode=0 Feb 24 05:45:01.501645 master-0 kubenswrapper[34361]: I0224 05:45:01.501634 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" event={"ID":"ec4713f8-2961-462e-bdf0-ba653bd29445","Type":"ContainerDied","Data":"c1ac0cee8a525ab68a46ab6804577690a88d31fed8c7b6cb524c5f069f7f51c4"} Feb 24 05:45:01.502410 master-0 kubenswrapper[34361]: I0224 05:45:01.501675 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" event={"ID":"ec4713f8-2961-462e-bdf0-ba653bd29445","Type":"ContainerStarted","Data":"8f0ce42d96de6c3aae8eecd1f40dc0fd04f10c5d231890de3a10f5f73bc1f6b5"} Feb 24 05:45:02.363661 master-0 kubenswrapper[34361]: I0224 05:45:02.363549 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n"] Feb 24 05:45:02.366766 master-0 kubenswrapper[34361]: I0224 05:45:02.366698 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.371334 master-0 kubenswrapper[34361]: I0224 05:45:02.371091 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n"] Feb 24 05:45:02.517197 master-0 kubenswrapper[34361]: I0224 05:45:02.517092 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/dbc49689-be99-4a51-a9a7-080d8843e05c-os-client-config\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: \"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.517623 master-0 kubenswrapper[34361]: I0224 05:45:02.517353 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x2z5\" (UniqueName: \"kubernetes.io/projected/dbc49689-be99-4a51-a9a7-080d8843e05c-kube-api-access-5x2z5\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: \"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.517623 master-0 kubenswrapper[34361]: I0224 05:45:02.517479 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/dbc49689-be99-4a51-a9a7-080d8843e05c-nova-console-recordings-pv\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: \"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.620058 master-0 kubenswrapper[34361]: I0224 05:45:02.619964 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/dbc49689-be99-4a51-a9a7-080d8843e05c-os-client-config\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: 
\"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.620444 master-0 kubenswrapper[34361]: I0224 05:45:02.620125 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x2z5\" (UniqueName: \"kubernetes.io/projected/dbc49689-be99-4a51-a9a7-080d8843e05c-kube-api-access-5x2z5\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: \"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.620444 master-0 kubenswrapper[34361]: I0224 05:45:02.620421 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/dbc49689-be99-4a51-a9a7-080d8843e05c-nova-console-recordings-pv\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: \"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.630372 master-0 kubenswrapper[34361]: I0224 05:45:02.628788 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/dbc49689-be99-4a51-a9a7-080d8843e05c-os-client-config\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: \"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.641587 master-0 kubenswrapper[34361]: I0224 05:45:02.641510 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x2z5\" (UniqueName: \"kubernetes.io/projected/dbc49689-be99-4a51-a9a7-080d8843e05c-kube-api-access-5x2z5\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: \"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:02.905321 master-0 kubenswrapper[34361]: I0224 05:45:02.905157 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:03.028280 master-0 kubenswrapper[34361]: I0224 05:45:03.028206 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmdcx\" (UniqueName: \"kubernetes.io/projected/ec4713f8-2961-462e-bdf0-ba653bd29445-kube-api-access-vmdcx\") pod \"ec4713f8-2961-462e-bdf0-ba653bd29445\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " Feb 24 05:45:03.028618 master-0 kubenswrapper[34361]: I0224 05:45:03.028524 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4713f8-2961-462e-bdf0-ba653bd29445-secret-volume\") pod \"ec4713f8-2961-462e-bdf0-ba653bd29445\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " Feb 24 05:45:03.028618 master-0 kubenswrapper[34361]: I0224 05:45:03.028569 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4713f8-2961-462e-bdf0-ba653bd29445-config-volume\") pod \"ec4713f8-2961-462e-bdf0-ba653bd29445\" (UID: \"ec4713f8-2961-462e-bdf0-ba653bd29445\") " Feb 24 05:45:03.029213 master-0 kubenswrapper[34361]: I0224 05:45:03.029152 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec4713f8-2961-462e-bdf0-ba653bd29445-config-volume" (OuterVolumeSpecName: "config-volume") pod "ec4713f8-2961-462e-bdf0-ba653bd29445" (UID: "ec4713f8-2961-462e-bdf0-ba653bd29445"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:45:03.031962 master-0 kubenswrapper[34361]: I0224 05:45:03.031893 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec4713f8-2961-462e-bdf0-ba653bd29445-kube-api-access-vmdcx" (OuterVolumeSpecName: "kube-api-access-vmdcx") pod "ec4713f8-2961-462e-bdf0-ba653bd29445" (UID: "ec4713f8-2961-462e-bdf0-ba653bd29445"). InnerVolumeSpecName "kube-api-access-vmdcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:45:03.032386 master-0 kubenswrapper[34361]: I0224 05:45:03.032299 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec4713f8-2961-462e-bdf0-ba653bd29445-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec4713f8-2961-462e-bdf0-ba653bd29445" (UID: "ec4713f8-2961-462e-bdf0-ba653bd29445"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:45:03.131356 master-0 kubenswrapper[34361]: I0224 05:45:03.131245 34361 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec4713f8-2961-462e-bdf0-ba653bd29445-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 05:45:03.131356 master-0 kubenswrapper[34361]: I0224 05:45:03.131346 34361 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec4713f8-2961-462e-bdf0-ba653bd29445-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 05:45:03.131356 master-0 kubenswrapper[34361]: I0224 05:45:03.131364 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmdcx\" (UniqueName: \"kubernetes.io/projected/ec4713f8-2961-462e-bdf0-ba653bd29445-kube-api-access-vmdcx\") on node \"master-0\" DevicePath \"\"" Feb 24 05:45:03.286613 master-0 kubenswrapper[34361]: I0224 05:45:03.286509 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/dbc49689-be99-4a51-a9a7-080d8843e05c-nova-console-recordings-pv\") pod \"nova-console-recorder-7b97cdbf9f-vzh2n\" (UID: \"dbc49689-be99-4a51-a9a7-080d8843e05c\") " pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:03.316339 master-0 kubenswrapper[34361]: I0224 05:45:03.316229 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" Feb 24 05:45:03.527373 master-0 kubenswrapper[34361]: I0224 05:45:03.527214 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" event={"ID":"ec4713f8-2961-462e-bdf0-ba653bd29445","Type":"ContainerDied","Data":"8f0ce42d96de6c3aae8eecd1f40dc0fd04f10c5d231890de3a10f5f73bc1f6b5"} Feb 24 05:45:03.527373 master-0 kubenswrapper[34361]: I0224 05:45:03.527281 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht" Feb 24 05:45:03.527373 master-0 kubenswrapper[34361]: I0224 05:45:03.527340 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f0ce42d96de6c3aae8eecd1f40dc0fd04f10c5d231890de3a10f5f73bc1f6b5" Feb 24 05:45:03.844337 master-0 kubenswrapper[34361]: W0224 05:45:03.844224 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbc49689_be99_4a51_a9a7_080d8843e05c.slice/crio-ea89716cdb54db006971d453d467702521ee172bb94371b56fd34cba2523fa2e WatchSource:0}: Error finding container ea89716cdb54db006971d453d467702521ee172bb94371b56fd34cba2523fa2e: Status 404 returned error can't find the container with id ea89716cdb54db006971d453d467702521ee172bb94371b56fd34cba2523fa2e Feb 24 05:45:03.852438 master-0 kubenswrapper[34361]: I0224 05:45:03.852356 34361 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n"] Feb 24 05:45:04.558168 master-0 kubenswrapper[34361]: I0224 05:45:04.558073 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" event={"ID":"dbc49689-be99-4a51-a9a7-080d8843e05c","Type":"ContainerStarted","Data":"ea89716cdb54db006971d453d467702521ee172bb94371b56fd34cba2523fa2e"} Feb 24 05:45:12.644666 master-0 kubenswrapper[34361]: I0224 05:45:12.644536 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" event={"ID":"dbc49689-be99-4a51-a9a7-080d8843e05c","Type":"ContainerStarted","Data":"61239cf8d410f66549c3853b9d62b884755bc49b52831f9c8e64ebd32c989785"} Feb 24 05:45:12.646227 master-0 kubenswrapper[34361]: I0224 05:45:12.646096 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" event={"ID":"dbc49689-be99-4a51-a9a7-080d8843e05c","Type":"ContainerStarted","Data":"80777e12c3aa00c56961c60508321e703a8e1adc13318259b8c42beec9481b47"} Feb 24 05:45:12.686163 master-0 kubenswrapper[34361]: I0224 05:45:12.686052 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-7b97cdbf9f-vzh2n" podStartSLOduration=2.336199766 podStartE2EDuration="10.685279169s" podCreationTimestamp="2026-02-24 05:45:02 +0000 UTC" firstStartedPulling="2026-02-24 05:45:03.847638972 +0000 UTC m=+463.550256058" lastFinishedPulling="2026-02-24 05:45:12.196718405 +0000 UTC m=+471.899335461" observedRunningTime="2026-02-24 05:45:12.67338167 +0000 UTC m=+472.375998756" watchObservedRunningTime="2026-02-24 05:45:12.685279169 +0000 UTC m=+472.387896225" Feb 24 05:45:38.471609 master-0 kubenswrapper[34361]: I0224 05:45:38.471503 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"] Feb 24 
05:45:38.472895 master-0 kubenswrapper[34361]: E0224 05:45:38.472019 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec4713f8-2961-462e-bdf0-ba653bd29445" containerName="collect-profiles" Feb 24 05:45:38.472895 master-0 kubenswrapper[34361]: I0224 05:45:38.472042 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec4713f8-2961-462e-bdf0-ba653bd29445" containerName="collect-profiles" Feb 24 05:45:38.472895 master-0 kubenswrapper[34361]: I0224 05:45:38.472410 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec4713f8-2961-462e-bdf0-ba653bd29445" containerName="collect-profiles" Feb 24 05:45:38.473951 master-0 kubenswrapper[34361]: I0224 05:45:38.473891 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp" Feb 24 05:45:38.480849 master-0 kubenswrapper[34361]: I0224 05:45:38.480774 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-4lhzk" Feb 24 05:45:38.497347 master-0 kubenswrapper[34361]: I0224 05:45:38.495786 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"] Feb 24 05:45:38.575383 master-0 kubenswrapper[34361]: I0224 05:45:38.575245 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp" Feb 24 05:45:38.575844 master-0 kubenswrapper[34361]: I0224 05:45:38.575420 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrch\" 
(UniqueName: \"kubernetes.io/projected/1bec92ed-ae5d-49a5-88d4-a4892243947c-kube-api-access-qzrch\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:38.575844 master-0 kubenswrapper[34361]: I0224 05:45:38.575588 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:38.677646 master-0 kubenswrapper[34361]: I0224 05:45:38.677592 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:38.678046 master-0 kubenswrapper[34361]: I0224 05:45:38.678023 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:38.678249 master-0 kubenswrapper[34361]: I0224 05:45:38.678226 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrch\" (UniqueName: \"kubernetes.io/projected/1bec92ed-ae5d-49a5-88d4-a4892243947c-kube-api-access-qzrch\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:38.678775 master-0 kubenswrapper[34361]: I0224 05:45:38.678703 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:38.679460 master-0 kubenswrapper[34361]: I0224 05:45:38.679281 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:38.713248 master-0 kubenswrapper[34361]: I0224 05:45:38.713156 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrch\" (UniqueName: \"kubernetes.io/projected/1bec92ed-ae5d-49a5-88d4-a4892243947c-kube-api-access-qzrch\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:38.802779 master-0 kubenswrapper[34361]: I0224 05:45:38.802681 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:39.365705 master-0 kubenswrapper[34361]: I0224 05:45:39.365602 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"]
Feb 24 05:45:39.373281 master-0 kubenswrapper[34361]: W0224 05:45:39.373217 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bec92ed_ae5d_49a5_88d4_a4892243947c.slice/crio-84710dcbfb1fc7f74fa90ee1d21de01a0c552521a48d65d988ef9b34eb7e84bf WatchSource:0}: Error finding container 84710dcbfb1fc7f74fa90ee1d21de01a0c552521a48d65d988ef9b34eb7e84bf: Status 404 returned error can't find the container with id 84710dcbfb1fc7f74fa90ee1d21de01a0c552521a48d65d988ef9b34eb7e84bf
Feb 24 05:45:39.936185 master-0 kubenswrapper[34361]: I0224 05:45:39.935999 34361 generic.go:334] "Generic (PLEG): container finished" podID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerID="58d5fa9fa3f622a0cfadf04cb5c818ff81f7c3ba8b044a0cd57e6d9a22c50478" exitCode=0
Feb 24 05:45:39.936185 master-0 kubenswrapper[34361]: I0224 05:45:39.936074 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp" event={"ID":"1bec92ed-ae5d-49a5-88d4-a4892243947c","Type":"ContainerDied","Data":"58d5fa9fa3f622a0cfadf04cb5c818ff81f7c3ba8b044a0cd57e6d9a22c50478"}
Feb 24 05:45:39.936185 master-0 kubenswrapper[34361]: I0224 05:45:39.936111 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp" event={"ID":"1bec92ed-ae5d-49a5-88d4-a4892243947c","Type":"ContainerStarted","Data":"84710dcbfb1fc7f74fa90ee1d21de01a0c552521a48d65d988ef9b34eb7e84bf"}
Feb 24 05:45:42.967997 master-0 kubenswrapper[34361]: I0224 05:45:42.967882 34361 generic.go:334] "Generic (PLEG): container finished" podID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerID="5cb4a9bfbe2f00d8ac0241de35a15c623e6ae055dce8a3c435e857124f07288c" exitCode=0
Feb 24 05:45:42.967997 master-0 kubenswrapper[34361]: I0224 05:45:42.967946 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp" event={"ID":"1bec92ed-ae5d-49a5-88d4-a4892243947c","Type":"ContainerDied","Data":"5cb4a9bfbe2f00d8ac0241de35a15c623e6ae055dce8a3c435e857124f07288c"}
Feb 24 05:45:43.984073 master-0 kubenswrapper[34361]: I0224 05:45:43.983974 34361 generic.go:334] "Generic (PLEG): container finished" podID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerID="2439dc61519f04c82da69a26abbe560ebec1c80208317380c43491aa3b46cca3" exitCode=0
Feb 24 05:45:43.985166 master-0 kubenswrapper[34361]: I0224 05:45:43.984069 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp" event={"ID":"1bec92ed-ae5d-49a5-88d4-a4892243947c","Type":"ContainerDied","Data":"2439dc61519f04c82da69a26abbe560ebec1c80208317380c43491aa3b46cca3"}
Feb 24 05:45:45.477557 master-0 kubenswrapper[34361]: I0224 05:45:45.477447 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:45.625980 master-0 kubenswrapper[34361]: I0224 05:45:45.625808 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzrch\" (UniqueName: \"kubernetes.io/projected/1bec92ed-ae5d-49a5-88d4-a4892243947c-kube-api-access-qzrch\") pod \"1bec92ed-ae5d-49a5-88d4-a4892243947c\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") "
Feb 24 05:45:45.626425 master-0 kubenswrapper[34361]: I0224 05:45:45.626394 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-bundle\") pod \"1bec92ed-ae5d-49a5-88d4-a4892243947c\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") "
Feb 24 05:45:45.627891 master-0 kubenswrapper[34361]: I0224 05:45:45.627529 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-util\") pod \"1bec92ed-ae5d-49a5-88d4-a4892243947c\" (UID: \"1bec92ed-ae5d-49a5-88d4-a4892243947c\") "
Feb 24 05:45:45.628721 master-0 kubenswrapper[34361]: I0224 05:45:45.628636 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-bundle" (OuterVolumeSpecName: "bundle") pod "1bec92ed-ae5d-49a5-88d4-a4892243947c" (UID: "1bec92ed-ae5d-49a5-88d4-a4892243947c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:45:45.636580 master-0 kubenswrapper[34361]: I0224 05:45:45.636465 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bec92ed-ae5d-49a5-88d4-a4892243947c-kube-api-access-qzrch" (OuterVolumeSpecName: "kube-api-access-qzrch") pod "1bec92ed-ae5d-49a5-88d4-a4892243947c" (UID: "1bec92ed-ae5d-49a5-88d4-a4892243947c"). InnerVolumeSpecName "kube-api-access-qzrch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:45:45.650292 master-0 kubenswrapper[34361]: I0224 05:45:45.650042 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-util" (OuterVolumeSpecName: "util") pod "1bec92ed-ae5d-49a5-88d4-a4892243947c" (UID: "1bec92ed-ae5d-49a5-88d4-a4892243947c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:45:45.733765 master-0 kubenswrapper[34361]: I0224 05:45:45.733486 34361 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-util\") on node \"master-0\" DevicePath \"\""
Feb 24 05:45:45.733765 master-0 kubenswrapper[34361]: I0224 05:45:45.733571 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzrch\" (UniqueName: \"kubernetes.io/projected/1bec92ed-ae5d-49a5-88d4-a4892243947c-kube-api-access-qzrch\") on node \"master-0\" DevicePath \"\""
Feb 24 05:45:45.733765 master-0 kubenswrapper[34361]: I0224 05:45:45.733641 34361 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bec92ed-ae5d-49a5-88d4-a4892243947c-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:45:46.011498 master-0 kubenswrapper[34361]: I0224 05:45:46.011399 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp" event={"ID":"1bec92ed-ae5d-49a5-88d4-a4892243947c","Type":"ContainerDied","Data":"84710dcbfb1fc7f74fa90ee1d21de01a0c552521a48d65d988ef9b34eb7e84bf"}
Feb 24 05:45:46.011956 master-0 kubenswrapper[34361]: I0224 05:45:46.011923 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84710dcbfb1fc7f74fa90ee1d21de01a0c552521a48d65d988ef9b34eb7e84bf"
Feb 24 05:45:46.012151 master-0 kubenswrapper[34361]: I0224 05:45:46.011524 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4g99lp"
Feb 24 05:45:53.362267 master-0 kubenswrapper[34361]: I0224 05:45:53.362185 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7fd9747c7b-h8dsz"]
Feb 24 05:45:53.363071 master-0 kubenswrapper[34361]: E0224 05:45:53.362563 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerName="pull"
Feb 24 05:45:53.363071 master-0 kubenswrapper[34361]: I0224 05:45:53.362578 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerName="pull"
Feb 24 05:45:53.363071 master-0 kubenswrapper[34361]: E0224 05:45:53.362618 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerName="extract"
Feb 24 05:45:53.363071 master-0 kubenswrapper[34361]: I0224 05:45:53.362627 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerName="extract"
Feb 24 05:45:53.363071 master-0 kubenswrapper[34361]: E0224 05:45:53.362642 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerName="util"
Feb 24 05:45:53.363071 master-0 kubenswrapper[34361]: I0224 05:45:53.362648 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerName="util"
Feb 24 05:45:53.363071 master-0 kubenswrapper[34361]: I0224 05:45:53.362814 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bec92ed-ae5d-49a5-88d4-a4892243947c" containerName="extract"
Feb 24 05:45:53.363444 master-0 kubenswrapper[34361]: I0224 05:45:53.363397 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.366376 master-0 kubenswrapper[34361]: I0224 05:45:53.366294 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert"
Feb 24 05:45:53.366565 master-0 kubenswrapper[34361]: I0224 05:45:53.366511 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt"
Feb 24 05:45:53.366806 master-0 kubenswrapper[34361]: I0224 05:45:53.366747 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert"
Feb 24 05:45:53.366938 master-0 kubenswrapper[34361]: I0224 05:45:53.366747 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert"
Feb 24 05:45:53.367006 master-0 kubenswrapper[34361]: I0224 05:45:53.366984 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt"
Feb 24 05:45:53.388168 master-0 kubenswrapper[34361]: I0224 05:45:53.388072 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7fd9747c7b-h8dsz"]
Feb 24 05:45:53.489281 master-0 kubenswrapper[34361]: I0224 05:45:53.489196 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crk2n\" (UniqueName: \"kubernetes.io/projected/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-kube-api-access-crk2n\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.489593 master-0 kubenswrapper[34361]: I0224 05:45:53.489332 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-socket-dir\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.489593 master-0 kubenswrapper[34361]: I0224 05:45:53.489379 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-webhook-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.489593 master-0 kubenswrapper[34361]: I0224 05:45:53.489448 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-metrics-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.489593 master-0 kubenswrapper[34361]: I0224 05:45:53.489468 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-apiservice-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.591838 master-0 kubenswrapper[34361]: I0224 05:45:53.591781 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-apiservice-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.592177 master-0 kubenswrapper[34361]: I0224 05:45:53.592160 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-metrics-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.592332 master-0 kubenswrapper[34361]: I0224 05:45:53.592298 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crk2n\" (UniqueName: \"kubernetes.io/projected/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-kube-api-access-crk2n\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.592463 master-0 kubenswrapper[34361]: I0224 05:45:53.592451 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-socket-dir\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.592652 master-0 kubenswrapper[34361]: I0224 05:45:53.592635 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-webhook-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.593396 master-0 kubenswrapper[34361]: I0224 05:45:53.593270 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-socket-dir\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.597297 master-0 kubenswrapper[34361]: I0224 05:45:53.597275 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-apiservice-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.598896 master-0 kubenswrapper[34361]: I0224 05:45:53.598875 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-webhook-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.607271 master-0 kubenswrapper[34361]: I0224 05:45:53.607190 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-metrics-cert\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.616375 master-0 kubenswrapper[34361]: I0224 05:45:53.616236 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crk2n\" (UniqueName: \"kubernetes.io/projected/005c2fd8-2bdb-443c-9f33-fbe6925b74c9-kube-api-access-crk2n\") pod \"lvms-operator-7fd9747c7b-h8dsz\" (UID: \"005c2fd8-2bdb-443c-9f33-fbe6925b74c9\") " pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:53.680594 master-0 kubenswrapper[34361]: I0224 05:45:53.680532 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:45:54.128559 master-0 kubenswrapper[34361]: I0224 05:45:54.128492 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7fd9747c7b-h8dsz"]
Feb 24 05:45:54.134201 master-0 kubenswrapper[34361]: W0224 05:45:54.134146 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005c2fd8_2bdb_443c_9f33_fbe6925b74c9.slice/crio-7edddf524b0608ce44536ebbb84c2644d5ecb2115e47cd2fd9674a9df4215924 WatchSource:0}: Error finding container 7edddf524b0608ce44536ebbb84c2644d5ecb2115e47cd2fd9674a9df4215924: Status 404 returned error can't find the container with id 7edddf524b0608ce44536ebbb84c2644d5ecb2115e47cd2fd9674a9df4215924
Feb 24 05:45:55.109091 master-0 kubenswrapper[34361]: I0224 05:45:55.108908 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz" event={"ID":"005c2fd8-2bdb-443c-9f33-fbe6925b74c9","Type":"ContainerStarted","Data":"7edddf524b0608ce44536ebbb84c2644d5ecb2115e47cd2fd9674a9df4215924"}
Feb 24 05:46:00.178671 master-0 kubenswrapper[34361]: I0224 05:46:00.178571 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz" event={"ID":"005c2fd8-2bdb-443c-9f33-fbe6925b74c9","Type":"ContainerStarted","Data":"3f9ecf452cd10a2e17bb969fa80cc825aff570b647f46e41842a004057ef2909"}
Feb 24 05:46:00.179829 master-0 kubenswrapper[34361]: I0224 05:46:00.179301 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:46:00.187246 master-0 kubenswrapper[34361]: I0224 
05:46:00.187136 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz"
Feb 24 05:46:00.217912 master-0 kubenswrapper[34361]: I0224 05:46:00.217761 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7fd9747c7b-h8dsz" podStartSLOduration=2.349834496 podStartE2EDuration="7.217723543s" podCreationTimestamp="2026-02-24 05:45:53 +0000 UTC" firstStartedPulling="2026-02-24 05:45:54.138727473 +0000 UTC m=+513.841344529" lastFinishedPulling="2026-02-24 05:45:59.00661653 +0000 UTC m=+518.709233576" observedRunningTime="2026-02-24 05:46:00.203777653 +0000 UTC m=+519.906394799" watchObservedRunningTime="2026-02-24 05:46:00.217723543 +0000 UTC m=+519.920340629"
Feb 24 05:46:03.188291 master-0 kubenswrapper[34361]: I0224 05:46:03.188234 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"]
Feb 24 05:46:03.190471 master-0 kubenswrapper[34361]: I0224 05:46:03.190449 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.192875 master-0 kubenswrapper[34361]: I0224 05:46:03.192824 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-4lhzk"
Feb 24 05:46:03.223119 master-0 kubenswrapper[34361]: I0224 05:46:03.223031 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"]
Feb 24 05:46:03.291870 master-0 kubenswrapper[34361]: I0224 05:46:03.291785 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbgrj\" (UniqueName: \"kubernetes.io/projected/54f54781-46d3-40ab-8e73-140a23fd1d20-kube-api-access-wbgrj\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.292184 master-0 kubenswrapper[34361]: I0224 05:46:03.291901 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.292184 master-0 kubenswrapper[34361]: I0224 05:46:03.291989 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.394301 master-0 kubenswrapper[34361]: I0224 05:46:03.394199 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbgrj\" (UniqueName: \"kubernetes.io/projected/54f54781-46d3-40ab-8e73-140a23fd1d20-kube-api-access-wbgrj\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.394610 master-0 kubenswrapper[34361]: I0224 05:46:03.394536 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.394745 master-0 kubenswrapper[34361]: I0224 05:46:03.394707 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.395408 master-0 kubenswrapper[34361]: I0224 05:46:03.395373 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.395527 master-0 kubenswrapper[34361]: I0224 05:46:03.395478 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.396713 master-0 kubenswrapper[34361]: I0224 05:46:03.396671 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"]
Feb 24 05:46:03.398796 master-0 kubenswrapper[34361]: I0224 05:46:03.398755 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.424572 master-0 kubenswrapper[34361]: I0224 05:46:03.423944 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"]
Feb 24 05:46:03.435719 master-0 kubenswrapper[34361]: I0224 05:46:03.435384 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbgrj\" (UniqueName: \"kubernetes.io/projected/54f54781-46d3-40ab-8e73-140a23fd1d20-kube-api-access-wbgrj\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.496599 master-0 kubenswrapper[34361]: I0224 05:46:03.496509 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d59l4\" (UniqueName: \"kubernetes.io/projected/e5a1de3d-c243-4671-b274-15840c7999e4-kube-api-access-d59l4\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.497036 master-0 kubenswrapper[34361]: I0224 05:46:03.496710 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.497168 master-0 kubenswrapper[34361]: I0224 05:46:03.496993 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.513403 master-0 kubenswrapper[34361]: I0224 05:46:03.513289 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"
Feb 24 05:46:03.599813 master-0 kubenswrapper[34361]: I0224 05:46:03.599722 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d59l4\" (UniqueName: \"kubernetes.io/projected/e5a1de3d-c243-4671-b274-15840c7999e4-kube-api-access-d59l4\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.599813 master-0 kubenswrapper[34361]: I0224 05:46:03.599811 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.600202 master-0 kubenswrapper[34361]: I0224 05:46:03.599851 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.600601 master-0 kubenswrapper[34361]: I0224 05:46:03.600559 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.600947 master-0 kubenswrapper[34361]: I0224 05:46:03.600908 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.634029 master-0 kubenswrapper[34361]: I0224 05:46:03.633939 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d59l4\" (UniqueName: \"kubernetes.io/projected/e5a1de3d-c243-4671-b274-15840c7999e4-kube-api-access-d59l4\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:03.725154 master-0 kubenswrapper[34361]: I0224 05:46:03.725085 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"
Feb 24 05:46:04.062992 master-0 kubenswrapper[34361]: W0224 05:46:04.062913 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54f54781_46d3_40ab_8e73_140a23fd1d20.slice/crio-1b74af6b8bd3e0397feaac6c885cb4fac0e65e28c24a927630389f53417889fa WatchSource:0}: Error finding container 1b74af6b8bd3e0397feaac6c885cb4fac0e65e28c24a927630389f53417889fa: Status 404 returned error can't find the container with id 1b74af6b8bd3e0397feaac6c885cb4fac0e65e28c24a927630389f53417889fa
Feb 24 05:46:04.075740 master-0 kubenswrapper[34361]: I0224 05:46:04.075682 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6"]
Feb 24 05:46:04.175346 master-0 kubenswrapper[34361]: I0224 05:46:04.173752 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"]
Feb 24 05:46:04.175346 master-0 kubenswrapper[34361]: I0224 05:46:04.175285 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.199925 master-0 kubenswrapper[34361]: I0224 05:46:04.199146 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"]
Feb 24 05:46:04.223735 master-0 kubenswrapper[34361]: I0224 05:46:04.223634 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f"]
Feb 24 05:46:04.225060 master-0 kubenswrapper[34361]: I0224 05:46:04.224967 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6" event={"ID":"54f54781-46d3-40ab-8e73-140a23fd1d20","Type":"ContainerStarted","Data":"1b74af6b8bd3e0397feaac6c885cb4fac0e65e28c24a927630389f53417889fa"}
Feb 24 05:46:04.239495 master-0 kubenswrapper[34361]: W0224 05:46:04.239416 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5a1de3d_c243_4671_b274_15840c7999e4.slice/crio-3c25d62fb8a6bb34366323a6f4d41c8f13043853d47eef11c5eb8816ff92cad9 WatchSource:0}: Error finding container 3c25d62fb8a6bb34366323a6f4d41c8f13043853d47eef11c5eb8816ff92cad9: Status 404 returned error can't find the container with id 3c25d62fb8a6bb34366323a6f4d41c8f13043853d47eef11c5eb8816ff92cad9
Feb 24 05:46:04.320789 master-0 kubenswrapper[34361]: I0224 05:46:04.320623 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblfs\" (UniqueName: \"kubernetes.io/projected/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-kube-api-access-rblfs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.321168 master-0 kubenswrapper[34361]: I0224 05:46:04.321087 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.321168 master-0 kubenswrapper[34361]: I0224 05:46:04.321158 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.423384 master-0 kubenswrapper[34361]: I0224 05:46:04.423109 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.423384 master-0 kubenswrapper[34361]: I0224 05:46:04.422518 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.423384 master-0 kubenswrapper[34361]: I0224 05:46:04.423216 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.423865 master-0 kubenswrapper[34361]: I0224 05:46:04.423526 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.423865 master-0 kubenswrapper[34361]: I0224 05:46:04.423650 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rblfs\" (UniqueName: \"kubernetes.io/projected/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-kube-api-access-rblfs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.449304 master-0 kubenswrapper[34361]: I0224 05:46:04.448831 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rblfs\" (UniqueName: \"kubernetes.io/projected/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-kube-api-access-rblfs\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"
Feb 24 05:46:04.520237 master-0 kubenswrapper[34361]: I0224 05:46:04.520160 34361 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h" Feb 24 05:46:05.062612 master-0 kubenswrapper[34361]: I0224 05:46:05.062536 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h"] Feb 24 05:46:05.065908 master-0 kubenswrapper[34361]: W0224 05:46:05.065811 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2955113d_b1b9_4c0f_84e0_2baf3f0e7125.slice/crio-cae97f566c00278e3470c54f697d004617502e0945f3b877dd17e60814fee9af WatchSource:0}: Error finding container cae97f566c00278e3470c54f697d004617502e0945f3b877dd17e60814fee9af: Status 404 returned error can't find the container with id cae97f566c00278e3470c54f697d004617502e0945f3b877dd17e60814fee9af Feb 24 05:46:05.236647 master-0 kubenswrapper[34361]: I0224 05:46:05.236570 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h" event={"ID":"2955113d-b1b9-4c0f-84e0-2baf3f0e7125","Type":"ContainerStarted","Data":"cae97f566c00278e3470c54f697d004617502e0945f3b877dd17e60814fee9af"} Feb 24 05:46:05.239168 master-0 kubenswrapper[34361]: I0224 05:46:05.239121 34361 generic.go:334] "Generic (PLEG): container finished" podID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerID="3caa4f98836c44f2d2b9733847179db797098679ea1802be80928891787eec2d" exitCode=0 Feb 24 05:46:05.239373 master-0 kubenswrapper[34361]: I0224 05:46:05.239249 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6" event={"ID":"54f54781-46d3-40ab-8e73-140a23fd1d20","Type":"ContainerDied","Data":"3caa4f98836c44f2d2b9733847179db797098679ea1802be80928891787eec2d"} Feb 24 05:46:05.249600 master-0 
kubenswrapper[34361]: I0224 05:46:05.249484 34361 generic.go:334] "Generic (PLEG): container finished" podID="e5a1de3d-c243-4671-b274-15840c7999e4" containerID="b49f85661ed3a858325f2611d03beaf1f8ae6b2bc3667f20ba782e1b34846de1" exitCode=0 Feb 24 05:46:05.249600 master-0 kubenswrapper[34361]: I0224 05:46:05.249579 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f" event={"ID":"e5a1de3d-c243-4671-b274-15840c7999e4","Type":"ContainerDied","Data":"b49f85661ed3a858325f2611d03beaf1f8ae6b2bc3667f20ba782e1b34846de1"} Feb 24 05:46:05.249841 master-0 kubenswrapper[34361]: I0224 05:46:05.249618 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f" event={"ID":"e5a1de3d-c243-4671-b274-15840c7999e4","Type":"ContainerStarted","Data":"3c25d62fb8a6bb34366323a6f4d41c8f13043853d47eef11c5eb8816ff92cad9"} Feb 24 05:46:06.265143 master-0 kubenswrapper[34361]: I0224 05:46:06.264978 34361 generic.go:334] "Generic (PLEG): container finished" podID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerID="509291e7edd7a82c2dc14c551aed626cfae6dd05521c875d7e4712399fc5aa69" exitCode=0 Feb 24 05:46:06.265143 master-0 kubenswrapper[34361]: I0224 05:46:06.265063 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h" event={"ID":"2955113d-b1b9-4c0f-84e0-2baf3f0e7125","Type":"ContainerDied","Data":"509291e7edd7a82c2dc14c551aed626cfae6dd05521c875d7e4712399fc5aa69"} Feb 24 05:46:07.276834 master-0 kubenswrapper[34361]: I0224 05:46:07.276755 34361 generic.go:334] "Generic (PLEG): container finished" podID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerID="03cda38045bac2ab58cc099a8ef00e0f8cff11783f7428fb1b79c7edc4a0d6bf" exitCode=0 Feb 24 05:46:07.276834 master-0 kubenswrapper[34361]: I0224 05:46:07.276831 34361 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6" event={"ID":"54f54781-46d3-40ab-8e73-140a23fd1d20","Type":"ContainerDied","Data":"03cda38045bac2ab58cc099a8ef00e0f8cff11783f7428fb1b79c7edc4a0d6bf"} Feb 24 05:46:09.303538 master-0 kubenswrapper[34361]: I0224 05:46:09.303428 34361 generic.go:334] "Generic (PLEG): container finished" podID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerID="67d79a196b0f08ccdbd0913e4abacbeb2048ed4de251d505abb7b5a8057f4983" exitCode=0 Feb 24 05:46:09.304584 master-0 kubenswrapper[34361]: I0224 05:46:09.303556 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6" event={"ID":"54f54781-46d3-40ab-8e73-140a23fd1d20","Type":"ContainerDied","Data":"67d79a196b0f08ccdbd0913e4abacbeb2048ed4de251d505abb7b5a8057f4983"} Feb 24 05:46:09.307755 master-0 kubenswrapper[34361]: I0224 05:46:09.307669 34361 generic.go:334] "Generic (PLEG): container finished" podID="e5a1de3d-c243-4671-b274-15840c7999e4" containerID="f3addb5d67f943cea849e42ee95c900b7446f39b5dd6a3210c44fb91d8d843ba" exitCode=0 Feb 24 05:46:09.307908 master-0 kubenswrapper[34361]: I0224 05:46:09.307825 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f" event={"ID":"e5a1de3d-c243-4671-b274-15840c7999e4","Type":"ContainerDied","Data":"f3addb5d67f943cea849e42ee95c900b7446f39b5dd6a3210c44fb91d8d843ba"} Feb 24 05:46:09.310614 master-0 kubenswrapper[34361]: I0224 05:46:09.310516 34361 generic.go:334] "Generic (PLEG): container finished" podID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerID="e9384011965325e10d9f857f1d5f958257a7fe8217ac5698f5711714720b95a9" exitCode=0 Feb 24 05:46:09.310614 master-0 kubenswrapper[34361]: I0224 05:46:09.310603 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h" event={"ID":"2955113d-b1b9-4c0f-84e0-2baf3f0e7125","Type":"ContainerDied","Data":"e9384011965325e10d9f857f1d5f958257a7fe8217ac5698f5711714720b95a9"} Feb 24 05:46:10.324297 master-0 kubenswrapper[34361]: I0224 05:46:10.324210 34361 generic.go:334] "Generic (PLEG): container finished" podID="e5a1de3d-c243-4671-b274-15840c7999e4" containerID="9f982a5801261de46df2a0c7128084b034e36b7a49c7b798d626dcc0fd37d9c3" exitCode=0 Feb 24 05:46:10.325530 master-0 kubenswrapper[34361]: I0224 05:46:10.324379 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f" event={"ID":"e5a1de3d-c243-4671-b274-15840c7999e4","Type":"ContainerDied","Data":"9f982a5801261de46df2a0c7128084b034e36b7a49c7b798d626dcc0fd37d9c3"} Feb 24 05:46:10.330060 master-0 kubenswrapper[34361]: I0224 05:46:10.330001 34361 generic.go:334] "Generic (PLEG): container finished" podID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerID="db29390fc9f77280e9f1c9b1e535e8ba61121cb40e021b9f9ee70a2f11675465" exitCode=0 Feb 24 05:46:10.330301 master-0 kubenswrapper[34361]: I0224 05:46:10.330086 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h" event={"ID":"2955113d-b1b9-4c0f-84e0-2baf3f0e7125","Type":"ContainerDied","Data":"db29390fc9f77280e9f1c9b1e535e8ba61121cb40e021b9f9ee70a2f11675465"} Feb 24 05:46:10.785814 master-0 kubenswrapper[34361]: I0224 05:46:10.785664 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6"] Feb 24 05:46:10.787416 master-0 kubenswrapper[34361]: I0224 05:46:10.787355 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.810435 master-0 kubenswrapper[34361]: I0224 05:46:10.810339 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6"] Feb 24 05:46:10.840824 master-0 kubenswrapper[34361]: I0224 05:46:10.840761 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6" Feb 24 05:46:10.857769 master-0 kubenswrapper[34361]: I0224 05:46:10.857715 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hc4\" (UniqueName: \"kubernetes.io/projected/8e470f26-9bf0-41e5-b781-7217b844131e-kube-api-access-j7hc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.857886 master-0 kubenswrapper[34361]: I0224 05:46:10.857833 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.857886 master-0 kubenswrapper[34361]: I0224 05:46:10.857865 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.958730 master-0 kubenswrapper[34361]: I0224 05:46:10.958598 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-bundle\") pod \"54f54781-46d3-40ab-8e73-140a23fd1d20\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " Feb 24 05:46:10.958730 master-0 kubenswrapper[34361]: I0224 05:46:10.958708 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbgrj\" (UniqueName: \"kubernetes.io/projected/54f54781-46d3-40ab-8e73-140a23fd1d20-kube-api-access-wbgrj\") pod \"54f54781-46d3-40ab-8e73-140a23fd1d20\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " Feb 24 05:46:10.959193 master-0 kubenswrapper[34361]: I0224 05:46:10.958874 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-util\") pod \"54f54781-46d3-40ab-8e73-140a23fd1d20\" (UID: \"54f54781-46d3-40ab-8e73-140a23fd1d20\") " Feb 24 05:46:10.959193 master-0 kubenswrapper[34361]: I0224 05:46:10.959135 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hc4\" (UniqueName: \"kubernetes.io/projected/8e470f26-9bf0-41e5-b781-7217b844131e-kube-api-access-j7hc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.959405 master-0 kubenswrapper[34361]: I0224 05:46:10.959375 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" 
(UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.959534 master-0 kubenswrapper[34361]: I0224 05:46:10.959452 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.960455 master-0 kubenswrapper[34361]: I0224 05:46:10.960365 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-bundle" (OuterVolumeSpecName: "bundle") pod "54f54781-46d3-40ab-8e73-140a23fd1d20" (UID: "54f54781-46d3-40ab-8e73-140a23fd1d20"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:46:10.960681 master-0 kubenswrapper[34361]: I0224 05:46:10.960628 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.960933 master-0 kubenswrapper[34361]: I0224 05:46:10.960858 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:10.963716 master-0 
kubenswrapper[34361]: I0224 05:46:10.963645 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f54781-46d3-40ab-8e73-140a23fd1d20-kube-api-access-wbgrj" (OuterVolumeSpecName: "kube-api-access-wbgrj") pod "54f54781-46d3-40ab-8e73-140a23fd1d20" (UID: "54f54781-46d3-40ab-8e73-140a23fd1d20"). InnerVolumeSpecName "kube-api-access-wbgrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:46:10.977117 master-0 kubenswrapper[34361]: I0224 05:46:10.977002 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-util" (OuterVolumeSpecName: "util") pod "54f54781-46d3-40ab-8e73-140a23fd1d20" (UID: "54f54781-46d3-40ab-8e73-140a23fd1d20"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:46:10.978877 master-0 kubenswrapper[34361]: I0224 05:46:10.978748 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hc4\" (UniqueName: \"kubernetes.io/projected/8e470f26-9bf0-41e5-b781-7217b844131e-kube-api-access-j7hc4\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:11.061424 master-0 kubenswrapper[34361]: I0224 05:46:11.061201 34361 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:11.061424 master-0 kubenswrapper[34361]: I0224 05:46:11.061292 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbgrj\" (UniqueName: \"kubernetes.io/projected/54f54781-46d3-40ab-8e73-140a23fd1d20-kube-api-access-wbgrj\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:11.061424 master-0 kubenswrapper[34361]: I0224 
05:46:11.061354 34361 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/54f54781-46d3-40ab-8e73-140a23fd1d20-util\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:11.155412 master-0 kubenswrapper[34361]: I0224 05:46:11.154536 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:11.372213 master-0 kubenswrapper[34361]: I0224 05:46:11.372125 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6" Feb 24 05:46:11.372972 master-0 kubenswrapper[34361]: I0224 05:46:11.372385 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213v84c6" event={"ID":"54f54781-46d3-40ab-8e73-140a23fd1d20","Type":"ContainerDied","Data":"1b74af6b8bd3e0397feaac6c885cb4fac0e65e28c24a927630389f53417889fa"} Feb 24 05:46:11.372972 master-0 kubenswrapper[34361]: I0224 05:46:11.372454 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b74af6b8bd3e0397feaac6c885cb4fac0e65e28c24a927630389f53417889fa" Feb 24 05:46:11.670167 master-0 kubenswrapper[34361]: I0224 05:46:11.670072 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6"] Feb 24 05:46:11.682980 master-0 kubenswrapper[34361]: W0224 05:46:11.682931 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e470f26_9bf0_41e5_b781_7217b844131e.slice/crio-9eb53ce2df3d1f0998073f6ad2ff8f792dd9a67470b49ca4cc1ae04519b2a3dc WatchSource:0}: Error finding container 9eb53ce2df3d1f0998073f6ad2ff8f792dd9a67470b49ca4cc1ae04519b2a3dc: Status 404 returned error can't find 
the container with id 9eb53ce2df3d1f0998073f6ad2ff8f792dd9a67470b49ca4cc1ae04519b2a3dc Feb 24 05:46:11.847383 master-0 kubenswrapper[34361]: I0224 05:46:11.847229 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f" Feb 24 05:46:11.855945 master-0 kubenswrapper[34361]: I0224 05:46:11.855277 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h" Feb 24 05:46:11.886084 master-0 kubenswrapper[34361]: I0224 05:46:11.880828 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-bundle\") pod \"e5a1de3d-c243-4671-b274-15840c7999e4\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " Feb 24 05:46:11.886084 master-0 kubenswrapper[34361]: I0224 05:46:11.880918 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d59l4\" (UniqueName: \"kubernetes.io/projected/e5a1de3d-c243-4671-b274-15840c7999e4-kube-api-access-d59l4\") pod \"e5a1de3d-c243-4671-b274-15840c7999e4\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " Feb 24 05:46:11.886084 master-0 kubenswrapper[34361]: I0224 05:46:11.881103 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-util\") pod \"e5a1de3d-c243-4671-b274-15840c7999e4\" (UID: \"e5a1de3d-c243-4671-b274-15840c7999e4\") " Feb 24 05:46:11.886084 master-0 kubenswrapper[34361]: I0224 05:46:11.884115 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-bundle" (OuterVolumeSpecName: "bundle") pod "e5a1de3d-c243-4671-b274-15840c7999e4" (UID: 
"e5a1de3d-c243-4671-b274-15840c7999e4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:46:11.886084 master-0 kubenswrapper[34361]: I0224 05:46:11.884836 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a1de3d-c243-4671-b274-15840c7999e4-kube-api-access-d59l4" (OuterVolumeSpecName: "kube-api-access-d59l4") pod "e5a1de3d-c243-4671-b274-15840c7999e4" (UID: "e5a1de3d-c243-4671-b274-15840c7999e4"). InnerVolumeSpecName "kube-api-access-d59l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:46:11.887846 master-0 kubenswrapper[34361]: I0224 05:46:11.887778 34361 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:11.887919 master-0 kubenswrapper[34361]: I0224 05:46:11.887863 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d59l4\" (UniqueName: \"kubernetes.io/projected/e5a1de3d-c243-4671-b274-15840c7999e4-kube-api-access-d59l4\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:11.907601 master-0 kubenswrapper[34361]: I0224 05:46:11.907501 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-util" (OuterVolumeSpecName: "util") pod "e5a1de3d-c243-4671-b274-15840c7999e4" (UID: "e5a1de3d-c243-4671-b274-15840c7999e4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:46:11.989542 master-0 kubenswrapper[34361]: I0224 05:46:11.989437 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-util\") pod \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " Feb 24 05:46:11.989753 master-0 kubenswrapper[34361]: I0224 05:46:11.989658 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rblfs\" (UniqueName: \"kubernetes.io/projected/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-kube-api-access-rblfs\") pod \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " Feb 24 05:46:11.989800 master-0 kubenswrapper[34361]: I0224 05:46:11.989750 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-bundle\") pod \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\" (UID: \"2955113d-b1b9-4c0f-84e0-2baf3f0e7125\") " Feb 24 05:46:11.990539 master-0 kubenswrapper[34361]: I0224 05:46:11.990496 34361 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e5a1de3d-c243-4671-b274-15840c7999e4-util\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:11.990667 master-0 kubenswrapper[34361]: I0224 05:46:11.990619 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-bundle" (OuterVolumeSpecName: "bundle") pod "2955113d-b1b9-4c0f-84e0-2baf3f0e7125" (UID: "2955113d-b1b9-4c0f-84e0-2baf3f0e7125"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:46:11.993869 master-0 kubenswrapper[34361]: I0224 05:46:11.993776 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-kube-api-access-rblfs" (OuterVolumeSpecName: "kube-api-access-rblfs") pod "2955113d-b1b9-4c0f-84e0-2baf3f0e7125" (UID: "2955113d-b1b9-4c0f-84e0-2baf3f0e7125"). InnerVolumeSpecName "kube-api-access-rblfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:46:12.003998 master-0 kubenswrapper[34361]: I0224 05:46:12.003945 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-util" (OuterVolumeSpecName: "util") pod "2955113d-b1b9-4c0f-84e0-2baf3f0e7125" (UID: "2955113d-b1b9-4c0f-84e0-2baf3f0e7125"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:46:12.093417 master-0 kubenswrapper[34361]: I0224 05:46:12.093297 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rblfs\" (UniqueName: \"kubernetes.io/projected/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-kube-api-access-rblfs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:12.093417 master-0 kubenswrapper[34361]: I0224 05:46:12.093410 34361 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:12.093417 master-0 kubenswrapper[34361]: I0224 05:46:12.093435 34361 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2955113d-b1b9-4c0f-84e0-2baf3f0e7125-util\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:12.395564 master-0 kubenswrapper[34361]: I0224 05:46:12.395400 34361 generic.go:334] "Generic (PLEG): container finished" podID="8e470f26-9bf0-41e5-b781-7217b844131e" 
containerID="075a62a11ea7134293f46a7a959401b3c0d98aead0fc43796aae4c421761ea9d" exitCode=0 Feb 24 05:46:12.395564 master-0 kubenswrapper[34361]: I0224 05:46:12.395526 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" event={"ID":"8e470f26-9bf0-41e5-b781-7217b844131e","Type":"ContainerDied","Data":"075a62a11ea7134293f46a7a959401b3c0d98aead0fc43796aae4c421761ea9d"} Feb 24 05:46:12.395564 master-0 kubenswrapper[34361]: I0224 05:46:12.395569 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" event={"ID":"8e470f26-9bf0-41e5-b781-7217b844131e","Type":"ContainerStarted","Data":"9eb53ce2df3d1f0998073f6ad2ff8f792dd9a67470b49ca4cc1ae04519b2a3dc"} Feb 24 05:46:12.400038 master-0 kubenswrapper[34361]: I0224 05:46:12.399956 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f" event={"ID":"e5a1de3d-c243-4671-b274-15840c7999e4","Type":"ContainerDied","Data":"3c25d62fb8a6bb34366323a6f4d41c8f13043853d47eef11c5eb8816ff92cad9"} Feb 24 05:46:12.400094 master-0 kubenswrapper[34361]: I0224 05:46:12.400048 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c25d62fb8a6bb34366323a6f4d41c8f13043853d47eef11c5eb8816ff92cad9" Feb 24 05:46:12.400094 master-0 kubenswrapper[34361]: I0224 05:46:12.399985 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5drb7f" Feb 24 05:46:12.404465 master-0 kubenswrapper[34361]: I0224 05:46:12.404400 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h" event={"ID":"2955113d-b1b9-4c0f-84e0-2baf3f0e7125","Type":"ContainerDied","Data":"cae97f566c00278e3470c54f697d004617502e0945f3b877dd17e60814fee9af"} Feb 24 05:46:12.404465 master-0 kubenswrapper[34361]: I0224 05:46:12.404442 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cae97f566c00278e3470c54f697d004617502e0945f3b877dd17e60814fee9af" Feb 24 05:46:12.404596 master-0 kubenswrapper[34361]: I0224 05:46:12.404524 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaqw89h" Feb 24 05:46:14.427272 master-0 kubenswrapper[34361]: I0224 05:46:14.427179 34361 generic.go:334] "Generic (PLEG): container finished" podID="8e470f26-9bf0-41e5-b781-7217b844131e" containerID="d2cf0e25518cf833233fcc464287b167ccac1f52a67ea9ec69889427e586910d" exitCode=0 Feb 24 05:46:14.427272 master-0 kubenswrapper[34361]: I0224 05:46:14.427261 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" event={"ID":"8e470f26-9bf0-41e5-b781-7217b844131e","Type":"ContainerDied","Data":"d2cf0e25518cf833233fcc464287b167ccac1f52a67ea9ec69889427e586910d"} Feb 24 05:46:15.438401 master-0 kubenswrapper[34361]: I0224 05:46:15.438163 34361 generic.go:334] "Generic (PLEG): container finished" podID="8e470f26-9bf0-41e5-b781-7217b844131e" containerID="334d5ba3f835515e435a0e5527149e823e779c3d60724caca7d2d46078a381e1" exitCode=0 Feb 24 05:46:15.438401 master-0 kubenswrapper[34361]: I0224 05:46:15.438243 34361 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" event={"ID":"8e470f26-9bf0-41e5-b781-7217b844131e","Type":"ContainerDied","Data":"334d5ba3f835515e435a0e5527149e823e779c3d60724caca7d2d46078a381e1"} Feb 24 05:46:16.925341 master-0 kubenswrapper[34361]: I0224 05:46:16.925276 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:17.016421 master-0 kubenswrapper[34361]: I0224 05:46:17.014037 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7hc4\" (UniqueName: \"kubernetes.io/projected/8e470f26-9bf0-41e5-b781-7217b844131e-kube-api-access-j7hc4\") pod \"8e470f26-9bf0-41e5-b781-7217b844131e\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " Feb 24 05:46:17.016421 master-0 kubenswrapper[34361]: I0224 05:46:17.014138 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-bundle\") pod \"8e470f26-9bf0-41e5-b781-7217b844131e\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " Feb 24 05:46:17.016421 master-0 kubenswrapper[34361]: I0224 05:46:17.014207 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-util\") pod \"8e470f26-9bf0-41e5-b781-7217b844131e\" (UID: \"8e470f26-9bf0-41e5-b781-7217b844131e\") " Feb 24 05:46:17.019767 master-0 kubenswrapper[34361]: I0224 05:46:17.019683 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-bundle" (OuterVolumeSpecName: "bundle") pod "8e470f26-9bf0-41e5-b781-7217b844131e" (UID: "8e470f26-9bf0-41e5-b781-7217b844131e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:46:17.021098 master-0 kubenswrapper[34361]: I0224 05:46:17.021024 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e470f26-9bf0-41e5-b781-7217b844131e-kube-api-access-j7hc4" (OuterVolumeSpecName: "kube-api-access-j7hc4") pod "8e470f26-9bf0-41e5-b781-7217b844131e" (UID: "8e470f26-9bf0-41e5-b781-7217b844131e"). InnerVolumeSpecName "kube-api-access-j7hc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:46:17.030983 master-0 kubenswrapper[34361]: I0224 05:46:17.030908 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-util" (OuterVolumeSpecName: "util") pod "8e470f26-9bf0-41e5-b781-7217b844131e" (UID: "8e470f26-9bf0-41e5-b781-7217b844131e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:46:17.117927 master-0 kubenswrapper[34361]: I0224 05:46:17.117769 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7hc4\" (UniqueName: \"kubernetes.io/projected/8e470f26-9bf0-41e5-b781-7217b844131e-kube-api-access-j7hc4\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:17.117927 master-0 kubenswrapper[34361]: I0224 05:46:17.117842 34361 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:17.117927 master-0 kubenswrapper[34361]: I0224 05:46:17.117854 34361 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8e470f26-9bf0-41e5-b781-7217b844131e-util\") on node \"master-0\" DevicePath \"\"" Feb 24 05:46:17.458150 master-0 kubenswrapper[34361]: I0224 05:46:17.457962 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" event={"ID":"8e470f26-9bf0-41e5-b781-7217b844131e","Type":"ContainerDied","Data":"9eb53ce2df3d1f0998073f6ad2ff8f792dd9a67470b49ca4cc1ae04519b2a3dc"} Feb 24 05:46:17.458150 master-0 kubenswrapper[34361]: I0224 05:46:17.458054 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eb53ce2df3d1f0998073f6ad2ff8f792dd9a67470b49ca4cc1ae04519b2a3dc" Feb 24 05:46:17.458150 master-0 kubenswrapper[34361]: I0224 05:46:17.458073 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08xbhw6" Feb 24 05:46:21.131998 master-0 kubenswrapper[34361]: I0224 05:46:21.131924 34361 scope.go:117] "RemoveContainer" containerID="a6c4b7c7c8f2d6f7a5d9574827c1d87fc9e887e6f38197076ff1b4325039d136" Feb 24 05:46:22.947058 master-0 kubenswrapper[34361]: I0224 05:46:22.946981 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf"] Feb 24 05:46:22.947747 master-0 kubenswrapper[34361]: E0224 05:46:22.947715 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerName="pull" Feb 24 05:46:22.947747 master-0 kubenswrapper[34361]: I0224 05:46:22.947744 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerName="pull" Feb 24 05:46:22.947825 master-0 kubenswrapper[34361]: E0224 05:46:22.947768 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerName="extract" Feb 24 05:46:22.947825 master-0 kubenswrapper[34361]: I0224 05:46:22.947775 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerName="extract" Feb 24 05:46:22.947825 master-0 kubenswrapper[34361]: 
E0224 05:46:22.947794 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerName="util" Feb 24 05:46:22.947825 master-0 kubenswrapper[34361]: I0224 05:46:22.947803 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerName="util" Feb 24 05:46:22.947825 master-0 kubenswrapper[34361]: E0224 05:46:22.947814 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerName="util" Feb 24 05:46:22.947825 master-0 kubenswrapper[34361]: I0224 05:46:22.947819 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerName="util" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: E0224 05:46:22.947832 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e470f26-9bf0-41e5-b781-7217b844131e" containerName="util" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: I0224 05:46:22.947838 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e470f26-9bf0-41e5-b781-7217b844131e" containerName="util" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: E0224 05:46:22.947848 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a1de3d-c243-4671-b274-15840c7999e4" containerName="extract" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: I0224 05:46:22.947854 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a1de3d-c243-4671-b274-15840c7999e4" containerName="extract" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: E0224 05:46:22.947863 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerName="extract" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: I0224 05:46:22.947869 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerName="extract" Feb 24 05:46:22.948005 
master-0 kubenswrapper[34361]: E0224 05:46:22.947879 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e470f26-9bf0-41e5-b781-7217b844131e" containerName="pull" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: I0224 05:46:22.947885 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e470f26-9bf0-41e5-b781-7217b844131e" containerName="pull" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: E0224 05:46:22.947908 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a1de3d-c243-4671-b274-15840c7999e4" containerName="util" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: I0224 05:46:22.947916 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a1de3d-c243-4671-b274-15840c7999e4" containerName="util" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: E0224 05:46:22.947929 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e470f26-9bf0-41e5-b781-7217b844131e" containerName="extract" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: I0224 05:46:22.947934 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e470f26-9bf0-41e5-b781-7217b844131e" containerName="extract" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: E0224 05:46:22.947942 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerName="pull" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: I0224 05:46:22.947948 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerName="pull" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: E0224 05:46:22.947958 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a1de3d-c243-4671-b274-15840c7999e4" containerName="pull" Feb 24 05:46:22.948005 master-0 kubenswrapper[34361]: I0224 05:46:22.947964 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a1de3d-c243-4671-b274-15840c7999e4" containerName="pull" 
Feb 24 05:46:22.948512 master-0 kubenswrapper[34361]: I0224 05:46:22.948110 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f54781-46d3-40ab-8e73-140a23fd1d20" containerName="extract" Feb 24 05:46:22.948512 master-0 kubenswrapper[34361]: I0224 05:46:22.948139 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="2955113d-b1b9-4c0f-84e0-2baf3f0e7125" containerName="extract" Feb 24 05:46:22.948512 master-0 kubenswrapper[34361]: I0224 05:46:22.948153 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a1de3d-c243-4671-b274-15840c7999e4" containerName="extract" Feb 24 05:46:22.948512 master-0 kubenswrapper[34361]: I0224 05:46:22.948173 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e470f26-9bf0-41e5-b781-7217b844131e" containerName="extract" Feb 24 05:46:22.948768 master-0 kubenswrapper[34361]: I0224 05:46:22.948739 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" Feb 24 05:46:22.953871 master-0 kubenswrapper[34361]: I0224 05:46:22.953812 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 24 05:46:22.953871 master-0 kubenswrapper[34361]: I0224 05:46:22.953852 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 24 05:46:22.981058 master-0 kubenswrapper[34361]: I0224 05:46:22.980992 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf"] Feb 24 05:46:23.144969 master-0 kubenswrapper[34361]: I0224 05:46:23.144907 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/592948fd-49b5-4519-8cdc-7b8aa90cca8b-tmp\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-wp9sf\" (UID: \"592948fd-49b5-4519-8cdc-7b8aa90cca8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" Feb 24 05:46:23.145335 master-0 kubenswrapper[34361]: I0224 05:46:23.145287 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcjt\" (UniqueName: \"kubernetes.io/projected/592948fd-49b5-4519-8cdc-7b8aa90cca8b-kube-api-access-xjcjt\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wp9sf\" (UID: \"592948fd-49b5-4519-8cdc-7b8aa90cca8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" Feb 24 05:46:23.247439 master-0 kubenswrapper[34361]: I0224 05:46:23.247356 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcjt\" (UniqueName: \"kubernetes.io/projected/592948fd-49b5-4519-8cdc-7b8aa90cca8b-kube-api-access-xjcjt\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wp9sf\" (UID: \"592948fd-49b5-4519-8cdc-7b8aa90cca8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" Feb 24 05:46:23.247439 master-0 kubenswrapper[34361]: I0224 05:46:23.247439 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/592948fd-49b5-4519-8cdc-7b8aa90cca8b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wp9sf\" (UID: \"592948fd-49b5-4519-8cdc-7b8aa90cca8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" Feb 24 05:46:23.248221 master-0 kubenswrapper[34361]: I0224 05:46:23.248168 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/592948fd-49b5-4519-8cdc-7b8aa90cca8b-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wp9sf\" (UID: \"592948fd-49b5-4519-8cdc-7b8aa90cca8b\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" Feb 24 05:46:23.273633 master-0 kubenswrapper[34361]: I0224 05:46:23.273559 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcjt\" (UniqueName: \"kubernetes.io/projected/592948fd-49b5-4519-8cdc-7b8aa90cca8b-kube-api-access-xjcjt\") pod \"cert-manager-operator-controller-manager-66c8bdd694-wp9sf\" (UID: \"592948fd-49b5-4519-8cdc-7b8aa90cca8b\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" Feb 24 05:46:23.278112 master-0 kubenswrapper[34361]: I0224 05:46:23.278054 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" Feb 24 05:46:23.764727 master-0 kubenswrapper[34361]: I0224 05:46:23.764683 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf"] Feb 24 05:46:24.595339 master-0 kubenswrapper[34361]: I0224 05:46:24.592593 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" event={"ID":"592948fd-49b5-4519-8cdc-7b8aa90cca8b","Type":"ContainerStarted","Data":"73d4008a0ee376ec7a31fa01834daa3bb898253644a6ffb2edd88115e2a07606"} Feb 24 05:46:27.042120 master-0 kubenswrapper[34361]: I0224 05:46:27.042017 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-8jfxc"] Feb 24 05:46:27.043500 master-0 kubenswrapper[34361]: I0224 05:46:27.043458 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-8jfxc" Feb 24 05:46:27.047728 master-0 kubenswrapper[34361]: I0224 05:46:27.047669 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 24 05:46:27.047966 master-0 kubenswrapper[34361]: I0224 05:46:27.047931 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 24 05:46:27.050829 master-0 kubenswrapper[34361]: I0224 05:46:27.050757 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-8jfxc"] Feb 24 05:46:27.136331 master-0 kubenswrapper[34361]: I0224 05:46:27.136194 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvntj\" (UniqueName: \"kubernetes.io/projected/c2a9e99a-12f1-48dc-8828-ee0de53fcfba-kube-api-access-rvntj\") pod \"nmstate-operator-694c9596b7-8jfxc\" (UID: \"c2a9e99a-12f1-48dc-8828-ee0de53fcfba\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-8jfxc" Feb 24 05:46:27.238550 master-0 kubenswrapper[34361]: I0224 05:46:27.238456 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvntj\" (UniqueName: \"kubernetes.io/projected/c2a9e99a-12f1-48dc-8828-ee0de53fcfba-kube-api-access-rvntj\") pod \"nmstate-operator-694c9596b7-8jfxc\" (UID: \"c2a9e99a-12f1-48dc-8828-ee0de53fcfba\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-8jfxc" Feb 24 05:46:27.255477 master-0 kubenswrapper[34361]: I0224 05:46:27.255427 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvntj\" (UniqueName: \"kubernetes.io/projected/c2a9e99a-12f1-48dc-8828-ee0de53fcfba-kube-api-access-rvntj\") pod \"nmstate-operator-694c9596b7-8jfxc\" (UID: \"c2a9e99a-12f1-48dc-8828-ee0de53fcfba\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-8jfxc" Feb 24 05:46:27.397292 
master-0 kubenswrapper[34361]: I0224 05:46:27.396761 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-8jfxc" Feb 24 05:46:28.335556 master-0 kubenswrapper[34361]: W0224 05:46:28.335469 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2a9e99a_12f1_48dc_8828_ee0de53fcfba.slice/crio-8eb60e4fc40dd2afc8735754203e21e9edddc047acdd1a126944510c3adf415d WatchSource:0}: Error finding container 8eb60e4fc40dd2afc8735754203e21e9edddc047acdd1a126944510c3adf415d: Status 404 returned error can't find the container with id 8eb60e4fc40dd2afc8735754203e21e9edddc047acdd1a126944510c3adf415d Feb 24 05:46:28.336287 master-0 kubenswrapper[34361]: I0224 05:46:28.336198 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-8jfxc"] Feb 24 05:46:28.637270 master-0 kubenswrapper[34361]: I0224 05:46:28.637062 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-8jfxc" event={"ID":"c2a9e99a-12f1-48dc-8828-ee0de53fcfba","Type":"ContainerStarted","Data":"8eb60e4fc40dd2afc8735754203e21e9edddc047acdd1a126944510c3adf415d"} Feb 24 05:46:28.640666 master-0 kubenswrapper[34361]: I0224 05:46:28.640600 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" event={"ID":"592948fd-49b5-4519-8cdc-7b8aa90cca8b","Type":"ContainerStarted","Data":"26058660780c8e3ed0ebf8f7355473b6bb89d05886f6b906c5ae29091c04a20b"} Feb 24 05:46:28.676625 master-0 kubenswrapper[34361]: I0224 05:46:28.676496 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-wp9sf" podStartSLOduration=2.619653235 podStartE2EDuration="6.676465535s" podCreationTimestamp="2026-02-24 05:46:22 +0000 UTC" 
firstStartedPulling="2026-02-24 05:46:23.77332009 +0000 UTC m=+543.475937136" lastFinishedPulling="2026-02-24 05:46:27.83013239 +0000 UTC m=+547.532749436" observedRunningTime="2026-02-24 05:46:28.668826731 +0000 UTC m=+548.371443787" watchObservedRunningTime="2026-02-24 05:46:28.676465535 +0000 UTC m=+548.379082581" Feb 24 05:46:31.201337 master-0 kubenswrapper[34361]: I0224 05:46:31.182324 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv"] Feb 24 05:46:31.201337 master-0 kubenswrapper[34361]: I0224 05:46:31.183384 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.201337 master-0 kubenswrapper[34361]: I0224 05:46:31.198862 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 24 05:46:31.201337 master-0 kubenswrapper[34361]: I0224 05:46:31.199670 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 24 05:46:31.201337 master-0 kubenswrapper[34361]: I0224 05:46:31.199819 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 24 05:46:31.202139 master-0 kubenswrapper[34361]: I0224 05:46:31.201915 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 24 05:46:31.235336 master-0 kubenswrapper[34361]: I0224 05:46:31.234958 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76bc0107-4054-43bd-aafc-e970d05d4504-webhook-cert\") pod \"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " 
pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.235336 master-0 kubenswrapper[34361]: I0224 05:46:31.235059 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76bc0107-4054-43bd-aafc-e970d05d4504-apiservice-cert\") pod \"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.235336 master-0 kubenswrapper[34361]: I0224 05:46:31.235095 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rmdh\" (UniqueName: \"kubernetes.io/projected/76bc0107-4054-43bd-aafc-e970d05d4504-kube-api-access-7rmdh\") pod \"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.257348 master-0 kubenswrapper[34361]: I0224 05:46:31.250465 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv"] Feb 24 05:46:31.340347 master-0 kubenswrapper[34361]: I0224 05:46:31.339537 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rmdh\" (UniqueName: \"kubernetes.io/projected/76bc0107-4054-43bd-aafc-e970d05d4504-kube-api-access-7rmdh\") pod \"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.340347 master-0 kubenswrapper[34361]: I0224 05:46:31.339648 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76bc0107-4054-43bd-aafc-e970d05d4504-webhook-cert\") pod 
\"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.340347 master-0 kubenswrapper[34361]: I0224 05:46:31.339721 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76bc0107-4054-43bd-aafc-e970d05d4504-apiservice-cert\") pod \"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.351343 master-0 kubenswrapper[34361]: I0224 05:46:31.345154 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/76bc0107-4054-43bd-aafc-e970d05d4504-webhook-cert\") pod \"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.358334 master-0 kubenswrapper[34361]: I0224 05:46:31.354198 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/76bc0107-4054-43bd-aafc-e970d05d4504-apiservice-cert\") pod \"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:31.413138 master-0 kubenswrapper[34361]: I0224 05:46:31.413076 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rmdh\" (UniqueName: \"kubernetes.io/projected/76bc0107-4054-43bd-aafc-e970d05d4504-kube-api-access-7rmdh\") pod \"metallb-operator-controller-manager-688bdcdc8c-4mpqv\" (UID: \"76bc0107-4054-43bd-aafc-e970d05d4504\") " pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 
05:46:31.621050 master-0 kubenswrapper[34361]: I0224 05:46:31.620963 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:46:32.027593 master-0 kubenswrapper[34361]: I0224 05:46:32.020000 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs"] Feb 24 05:46:32.027593 master-0 kubenswrapper[34361]: I0224 05:46:32.023882 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 05:46:32.035341 master-0 kubenswrapper[34361]: I0224 05:46:32.034149 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 24 05:46:32.035341 master-0 kubenswrapper[34361]: I0224 05:46:32.034449 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 24 05:46:32.063336 master-0 kubenswrapper[34361]: I0224 05:46:32.062923 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c053f6e1-0c7c-46c9-8e67-4218aef00c90-apiservice-cert\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 05:46:32.063336 master-0 kubenswrapper[34361]: I0224 05:46:32.063007 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsrjl\" (UniqueName: \"kubernetes.io/projected/c053f6e1-0c7c-46c9-8e67-4218aef00c90-kube-api-access-xsrjl\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 05:46:32.063336 
master-0 kubenswrapper[34361]: I0224 05:46:32.063076 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c053f6e1-0c7c-46c9-8e67-4218aef00c90-webhook-cert\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 05:46:32.086716 master-0 kubenswrapper[34361]: I0224 05:46:32.073183 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs"] Feb 24 05:46:32.183556 master-0 kubenswrapper[34361]: I0224 05:46:32.181250 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c053f6e1-0c7c-46c9-8e67-4218aef00c90-apiservice-cert\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 05:46:32.185795 master-0 kubenswrapper[34361]: I0224 05:46:32.185726 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsrjl\" (UniqueName: \"kubernetes.io/projected/c053f6e1-0c7c-46c9-8e67-4218aef00c90-kube-api-access-xsrjl\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 05:46:32.186043 master-0 kubenswrapper[34361]: I0224 05:46:32.186013 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c053f6e1-0c7c-46c9-8e67-4218aef00c90-webhook-cert\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 
05:46:32.230333 master-0 kubenswrapper[34361]: I0224 05:46:32.226946 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c053f6e1-0c7c-46c9-8e67-4218aef00c90-apiservice-cert\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs"
Feb 24 05:46:32.238325 master-0 kubenswrapper[34361]: I0224 05:46:32.233302 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsrjl\" (UniqueName: \"kubernetes.io/projected/c053f6e1-0c7c-46c9-8e67-4218aef00c90-kube-api-access-xsrjl\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs"
Feb 24 05:46:32.238325 master-0 kubenswrapper[34361]: I0224 05:46:32.238174 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c053f6e1-0c7c-46c9-8e67-4218aef00c90-webhook-cert\") pod \"metallb-operator-webhook-server-f5b8c49d9-w75vs\" (UID: \"c053f6e1-0c7c-46c9-8e67-4218aef00c90\") " pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs"
Feb 24 05:46:32.312468 master-0 kubenswrapper[34361]: I0224 05:46:32.312386 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv"]
Feb 24 05:46:32.317975 master-0 kubenswrapper[34361]: W0224 05:46:32.317898 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76bc0107_4054_43bd_aafc_e970d05d4504.slice/crio-2e7334e96ad125e7f15bd042a3daa889a5a45bcce349ceb0d9765154b9f1a38b WatchSource:0}: Error finding container 2e7334e96ad125e7f15bd042a3daa889a5a45bcce349ceb0d9765154b9f1a38b: Status 404 returned error can't find the container with id 2e7334e96ad125e7f15bd042a3daa889a5a45bcce349ceb0d9765154b9f1a38b
Feb 24 05:46:32.395866 master-0 kubenswrapper[34361]: I0224 05:46:32.395743 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs"
Feb 24 05:46:32.709393 master-0 kubenswrapper[34361]: I0224 05:46:32.708499 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" event={"ID":"76bc0107-4054-43bd-aafc-e970d05d4504","Type":"ContainerStarted","Data":"2e7334e96ad125e7f15bd042a3daa889a5a45bcce349ceb0d9765154b9f1a38b"}
Feb 24 05:46:32.741635 master-0 kubenswrapper[34361]: I0224 05:46:32.737121 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pxvzq"]
Feb 24 05:46:32.741635 master-0 kubenswrapper[34361]: I0224 05:46:32.738724 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:32.742505 master-0 kubenswrapper[34361]: I0224 05:46:32.742464 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 24 05:46:32.752019 master-0 kubenswrapper[34361]: I0224 05:46:32.751950 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 24 05:46:32.762708 master-0 kubenswrapper[34361]: I0224 05:46:32.762614 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pxvzq"]
Feb 24 05:46:32.814990 master-0 kubenswrapper[34361]: I0224 05:46:32.814918 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce2af05b-4d9f-4ca9-83f6-28dcda198e73-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pxvzq\" (UID: \"ce2af05b-4d9f-4ca9-83f6-28dcda198e73\") " pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:32.815282 master-0 kubenswrapper[34361]: I0224 05:46:32.815027 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj9zx\" (UniqueName: \"kubernetes.io/projected/ce2af05b-4d9f-4ca9-83f6-28dcda198e73-kube-api-access-jj9zx\") pod \"cert-manager-webhook-6888856db4-pxvzq\" (UID: \"ce2af05b-4d9f-4ca9-83f6-28dcda198e73\") " pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:32.854844 master-0 kubenswrapper[34361]: I0224 05:46:32.854777 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs"]
Feb 24 05:46:32.860365 master-0 kubenswrapper[34361]: W0224 05:46:32.860272 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc053f6e1_0c7c_46c9_8e67_4218aef00c90.slice/crio-138ec7689ab81721a5fe16f794adca12ee197efcbb2a2fdd8236b448d1674ea7 WatchSource:0}: Error finding container 138ec7689ab81721a5fe16f794adca12ee197efcbb2a2fdd8236b448d1674ea7: Status 404 returned error can't find the container with id 138ec7689ab81721a5fe16f794adca12ee197efcbb2a2fdd8236b448d1674ea7
Feb 24 05:46:32.920338 master-0 kubenswrapper[34361]: I0224 05:46:32.917338 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj9zx\" (UniqueName: \"kubernetes.io/projected/ce2af05b-4d9f-4ca9-83f6-28dcda198e73-kube-api-access-jj9zx\") pod \"cert-manager-webhook-6888856db4-pxvzq\" (UID: \"ce2af05b-4d9f-4ca9-83f6-28dcda198e73\") " pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:32.920338 master-0 kubenswrapper[34361]: I0224 05:46:32.917467 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce2af05b-4d9f-4ca9-83f6-28dcda198e73-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pxvzq\" (UID: \"ce2af05b-4d9f-4ca9-83f6-28dcda198e73\") " pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:32.947276 master-0 kubenswrapper[34361]: I0224 05:46:32.947211 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce2af05b-4d9f-4ca9-83f6-28dcda198e73-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-pxvzq\" (UID: \"ce2af05b-4d9f-4ca9-83f6-28dcda198e73\") " pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:32.959480 master-0 kubenswrapper[34361]: I0224 05:46:32.958203 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj9zx\" (UniqueName: \"kubernetes.io/projected/ce2af05b-4d9f-4ca9-83f6-28dcda198e73-kube-api-access-jj9zx\") pod \"cert-manager-webhook-6888856db4-pxvzq\" (UID: \"ce2af05b-4d9f-4ca9-83f6-28dcda198e73\") " pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:33.076524 master-0 kubenswrapper[34361]: I0224 05:46:33.076351 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:33.622478 master-0 kubenswrapper[34361]: I0224 05:46:33.621629 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-pxvzq"]
Feb 24 05:46:33.740438 master-0 kubenswrapper[34361]: I0224 05:46:33.740350 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq" event={"ID":"ce2af05b-4d9f-4ca9-83f6-28dcda198e73","Type":"ContainerStarted","Data":"6dac19dc07e19552ec4904888ea954a34b7e738c4925d3cb829a15c7d7033bc8"}
Feb 24 05:46:33.749246 master-0 kubenswrapper[34361]: I0224 05:46:33.745398 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" event={"ID":"c053f6e1-0c7c-46c9-8e67-4218aef00c90","Type":"ContainerStarted","Data":"138ec7689ab81721a5fe16f794adca12ee197efcbb2a2fdd8236b448d1674ea7"}
Feb 24 05:46:34.120994 master-0 kubenswrapper[34361]: I0224 05:46:34.120908 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vhdf8"]
Feb 24 05:46:34.122681 master-0 kubenswrapper[34361]: I0224 05:46:34.122022 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8"
Feb 24 05:46:34.135450 master-0 kubenswrapper[34361]: I0224 05:46:34.131264 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vhdf8"]
Feb 24 05:46:34.151357 master-0 kubenswrapper[34361]: I0224 05:46:34.151296 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl24d\" (UniqueName: \"kubernetes.io/projected/0026a5f4-b8f7-40d1-a422-7abac016fd8a-kube-api-access-xl24d\") pod \"cert-manager-cainjector-5545bd876-vhdf8\" (UID: \"0026a5f4-b8f7-40d1-a422-7abac016fd8a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8"
Feb 24 05:46:34.151453 master-0 kubenswrapper[34361]: I0224 05:46:34.151412 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0026a5f4-b8f7-40d1-a422-7abac016fd8a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vhdf8\" (UID: \"0026a5f4-b8f7-40d1-a422-7abac016fd8a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8"
Feb 24 05:46:34.253556 master-0 kubenswrapper[34361]: I0224 05:46:34.253472 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl24d\" (UniqueName: \"kubernetes.io/projected/0026a5f4-b8f7-40d1-a422-7abac016fd8a-kube-api-access-xl24d\") pod \"cert-manager-cainjector-5545bd876-vhdf8\" (UID: \"0026a5f4-b8f7-40d1-a422-7abac016fd8a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8"
Feb 24 05:46:34.253845 master-0 kubenswrapper[34361]: I0224 05:46:34.253578 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0026a5f4-b8f7-40d1-a422-7abac016fd8a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vhdf8\" (UID: \"0026a5f4-b8f7-40d1-a422-7abac016fd8a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8"
Feb 24 05:46:34.275338 master-0 kubenswrapper[34361]: I0224 05:46:34.274896 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0026a5f4-b8f7-40d1-a422-7abac016fd8a-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-vhdf8\" (UID: \"0026a5f4-b8f7-40d1-a422-7abac016fd8a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8"
Feb 24 05:46:34.282406 master-0 kubenswrapper[34361]: I0224 05:46:34.278947 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl24d\" (UniqueName: \"kubernetes.io/projected/0026a5f4-b8f7-40d1-a422-7abac016fd8a-kube-api-access-xl24d\") pod \"cert-manager-cainjector-5545bd876-vhdf8\" (UID: \"0026a5f4-b8f7-40d1-a422-7abac016fd8a\") " pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8"
Feb 24 05:46:34.463435 master-0 kubenswrapper[34361]: I0224 05:46:34.463152 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8"
Feb 24 05:46:35.349368 master-0 kubenswrapper[34361]: I0224 05:46:35.345612 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-vhdf8"]
Feb 24 05:46:35.773639 master-0 kubenswrapper[34361]: I0224 05:46:35.773558 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8" event={"ID":"0026a5f4-b8f7-40d1-a422-7abac016fd8a","Type":"ContainerStarted","Data":"9f1bf3c82bdac84e8f61cd2d817e8661587d090ebaa54c32941f613a29bf2b90"}
Feb 24 05:46:38.804338 master-0 kubenswrapper[34361]: I0224 05:46:38.804250 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" event={"ID":"76bc0107-4054-43bd-aafc-e970d05d4504","Type":"ContainerStarted","Data":"28d77d31bc578c1734beee4a4867a189974b54badfa1c76d96526c3ed91bdcc0"}
Feb 24 05:46:38.805033 master-0 kubenswrapper[34361]: I0224 05:46:38.804449 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv"
Feb 24 05:46:38.807081 master-0 kubenswrapper[34361]: I0224 05:46:38.807042 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-8jfxc" event={"ID":"c2a9e99a-12f1-48dc-8828-ee0de53fcfba","Type":"ContainerStarted","Data":"fa6575346f7c0bdb2eeded7a443f0be019726bc777fba636f1ca2bb1564b2f81"}
Feb 24 05:46:38.836200 master-0 kubenswrapper[34361]: I0224 05:46:38.836101 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" podStartSLOduration=2.062962919 podStartE2EDuration="7.836076012s" podCreationTimestamp="2026-02-24 05:46:31 +0000 UTC" firstStartedPulling="2026-02-24 05:46:32.331066836 +0000 UTC m=+552.033683882" lastFinishedPulling="2026-02-24 05:46:38.104179909 +0000 UTC m=+557.806796975" observedRunningTime="2026-02-24 05:46:38.824874363 +0000 UTC m=+558.527491419" watchObservedRunningTime="2026-02-24 05:46:38.836076012 +0000 UTC m=+558.538693058"
Feb 24 05:46:38.856339 master-0 kubenswrapper[34361]: I0224 05:46:38.855612 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-8jfxc" podStartSLOduration=2.090726446 podStartE2EDuration="11.855588995s" podCreationTimestamp="2026-02-24 05:46:27 +0000 UTC" firstStartedPulling="2026-02-24 05:46:28.33797383 +0000 UTC m=+548.040590886" lastFinishedPulling="2026-02-24 05:46:38.102836389 +0000 UTC m=+557.805453435" observedRunningTime="2026-02-24 05:46:38.851381182 +0000 UTC m=+558.553998258" watchObservedRunningTime="2026-02-24 05:46:38.855588995 +0000 UTC m=+558.558206041"
Feb 24 05:46:41.838406 master-0 kubenswrapper[34361]: I0224 05:46:41.838144 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8" event={"ID":"0026a5f4-b8f7-40d1-a422-7abac016fd8a","Type":"ContainerStarted","Data":"015c530a7eaeef20ba204aa8b3d1c1de8c30b095dda576243378bd9bf0d97ce8"}
Feb 24 05:46:41.858257 master-0 kubenswrapper[34361]: I0224 05:46:41.858190 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq" event={"ID":"ce2af05b-4d9f-4ca9-83f6-28dcda198e73","Type":"ContainerStarted","Data":"309d260692b9d57b95fb6d220eb447f15039f91d072c911da1490c8ad9e6fda4"}
Feb 24 05:46:41.858748 master-0 kubenswrapper[34361]: I0224 05:46:41.858705 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:41.875419 master-0 kubenswrapper[34361]: I0224 05:46:41.875330 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-vhdf8" podStartSLOduration=1.7568572850000002 podStartE2EDuration="7.875290624s" podCreationTimestamp="2026-02-24 05:46:34 +0000 UTC" firstStartedPulling="2026-02-24 05:46:35.351725232 +0000 UTC m=+555.054342278" lastFinishedPulling="2026-02-24 05:46:41.470158571 +0000 UTC m=+561.172775617" observedRunningTime="2026-02-24 05:46:41.867747332 +0000 UTC m=+561.570364388" watchObservedRunningTime="2026-02-24 05:46:41.875290624 +0000 UTC m=+561.577907670"
Feb 24 05:46:41.969441 master-0 kubenswrapper[34361]: I0224 05:46:41.968275 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq" podStartSLOduration=2.160458593 podStartE2EDuration="9.968251124s" podCreationTimestamp="2026-02-24 05:46:32 +0000 UTC" firstStartedPulling="2026-02-24 05:46:33.65146662 +0000 UTC m=+553.354083666" lastFinishedPulling="2026-02-24 05:46:41.459259151 +0000 UTC m=+561.161876197" observedRunningTime="2026-02-24 05:46:41.898736272 +0000 UTC m=+561.601353318" watchObservedRunningTime="2026-02-24 05:46:41.968251124 +0000 UTC m=+561.670868180"
Feb 24 05:46:41.983336 master-0 kubenswrapper[34361]: I0224 05:46:41.982345 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr"]
Feb 24 05:46:41.983682 master-0 kubenswrapper[34361]: I0224 05:46:41.983450 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr"
Feb 24 05:46:41.990690 master-0 kubenswrapper[34361]: I0224 05:46:41.988525 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 24 05:46:41.990690 master-0 kubenswrapper[34361]: I0224 05:46:41.988706 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 24 05:46:42.013330 master-0 kubenswrapper[34361]: I0224 05:46:42.012841 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr"]
Feb 24 05:46:42.100335 master-0 kubenswrapper[34361]: I0224 05:46:42.099402 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl8f7\" (UniqueName: \"kubernetes.io/projected/2af5df4a-595d-489e-8614-2d494d2c8bf7-kube-api-access-hl8f7\") pod \"obo-prometheus-operator-68bc856cb9-gmzdr\" (UID: \"2af5df4a-595d-489e-8614-2d494d2c8bf7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr"
Feb 24 05:46:42.192522 master-0 kubenswrapper[34361]: I0224 05:46:42.192408 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"]
Feb 24 05:46:42.195324 master-0 kubenswrapper[34361]: I0224 05:46:42.193892 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"
Feb 24 05:46:42.202251 master-0 kubenswrapper[34361]: I0224 05:46:42.202062 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl8f7\" (UniqueName: \"kubernetes.io/projected/2af5df4a-595d-489e-8614-2d494d2c8bf7-kube-api-access-hl8f7\") pod \"obo-prometheus-operator-68bc856cb9-gmzdr\" (UID: \"2af5df4a-595d-489e-8614-2d494d2c8bf7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr"
Feb 24 05:46:42.220668 master-0 kubenswrapper[34361]: I0224 05:46:42.220571 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 24 05:46:42.224147 master-0 kubenswrapper[34361]: I0224 05:46:42.224115 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl8f7\" (UniqueName: \"kubernetes.io/projected/2af5df4a-595d-489e-8614-2d494d2c8bf7-kube-api-access-hl8f7\") pod \"obo-prometheus-operator-68bc856cb9-gmzdr\" (UID: \"2af5df4a-595d-489e-8614-2d494d2c8bf7\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr"
Feb 24 05:46:42.224292 master-0 kubenswrapper[34361]: I0224 05:46:42.224273 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"]
Feb 24 05:46:42.225783 master-0 kubenswrapper[34361]: I0224 05:46:42.225750 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"
Feb 24 05:46:42.256689 master-0 kubenswrapper[34361]: I0224 05:46:42.256576 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"]
Feb 24 05:46:42.270856 master-0 kubenswrapper[34361]: I0224 05:46:42.270807 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"]
Feb 24 05:46:42.305883 master-0 kubenswrapper[34361]: I0224 05:46:42.305816 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a5b74e9-69ce-46e5-a636-61eebd5bab15-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-pq8bs\" (UID: \"4a5b74e9-69ce-46e5-a636-61eebd5bab15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"
Feb 24 05:46:42.306439 master-0 kubenswrapper[34361]: I0224 05:46:42.306416 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/246b7516-1e17-47a0-a3eb-1631b97a15e3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-qm7sz\" (UID: \"246b7516-1e17-47a0-a3eb-1631b97a15e3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"
Feb 24 05:46:42.306607 master-0 kubenswrapper[34361]: I0224 05:46:42.306589 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/246b7516-1e17-47a0-a3eb-1631b97a15e3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-qm7sz\" (UID: \"246b7516-1e17-47a0-a3eb-1631b97a15e3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"
Feb 24 05:46:42.306751 master-0 kubenswrapper[34361]: I0224 05:46:42.306736 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a5b74e9-69ce-46e5-a636-61eebd5bab15-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-pq8bs\" (UID: \"4a5b74e9-69ce-46e5-a636-61eebd5bab15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"
Feb 24 05:46:42.343135 master-0 kubenswrapper[34361]: I0224 05:46:42.343053 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tgbdb"]
Feb 24 05:46:42.344557 master-0 kubenswrapper[34361]: I0224 05:46:42.344539 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbdb"
Feb 24 05:46:42.348439 master-0 kubenswrapper[34361]: I0224 05:46:42.348389 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr"
Feb 24 05:46:42.349793 master-0 kubenswrapper[34361]: I0224 05:46:42.349743 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 24 05:46:42.358811 master-0 kubenswrapper[34361]: I0224 05:46:42.358767 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tgbdb"]
Feb 24 05:46:42.412272 master-0 kubenswrapper[34361]: I0224 05:46:42.408675 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3643e6eb-cea0-4a64-b183-1a75f0b5d2af-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tgbdb\" (UID: \"3643e6eb-cea0-4a64-b183-1a75f0b5d2af\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbdb"
Feb 24 05:46:42.412272 master-0 kubenswrapper[34361]: I0224 05:46:42.408757 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/246b7516-1e17-47a0-a3eb-1631b97a15e3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-qm7sz\" (UID: \"246b7516-1e17-47a0-a3eb-1631b97a15e3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"
Feb 24 05:46:42.412272 master-0 kubenswrapper[34361]: I0224 05:46:42.408841 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/246b7516-1e17-47a0-a3eb-1631b97a15e3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-qm7sz\" (UID: \"246b7516-1e17-47a0-a3eb-1631b97a15e3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"
Feb 24 05:46:42.412272 master-0 kubenswrapper[34361]: I0224 05:46:42.408874 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a5b74e9-69ce-46e5-a636-61eebd5bab15-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-pq8bs\" (UID: \"4a5b74e9-69ce-46e5-a636-61eebd5bab15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"
Feb 24 05:46:42.412272 master-0 kubenswrapper[34361]: I0224 05:46:42.408898 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a5b74e9-69ce-46e5-a636-61eebd5bab15-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-pq8bs\" (UID: \"4a5b74e9-69ce-46e5-a636-61eebd5bab15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"
Feb 24 05:46:42.412272 master-0 kubenswrapper[34361]: I0224 05:46:42.409170 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7dr\" (UniqueName: \"kubernetes.io/projected/3643e6eb-cea0-4a64-b183-1a75f0b5d2af-kube-api-access-pw7dr\") pod \"observability-operator-59bdc8b94-tgbdb\" (UID: \"3643e6eb-cea0-4a64-b183-1a75f0b5d2af\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbdb"
Feb 24 05:46:42.417938 master-0 kubenswrapper[34361]: I0224 05:46:42.417884 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/246b7516-1e17-47a0-a3eb-1631b97a15e3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-qm7sz\" (UID: \"246b7516-1e17-47a0-a3eb-1631b97a15e3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"
Feb 24 05:46:42.418524 master-0 kubenswrapper[34361]: I0224 05:46:42.418481 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a5b74e9-69ce-46e5-a636-61eebd5bab15-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-pq8bs\" (UID: \"4a5b74e9-69ce-46e5-a636-61eebd5bab15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"
Feb 24 05:46:42.420398 master-0 kubenswrapper[34361]: I0224 05:46:42.420372 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/246b7516-1e17-47a0-a3eb-1631b97a15e3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-qm7sz\" (UID: \"246b7516-1e17-47a0-a3eb-1631b97a15e3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"
Feb 24 05:46:42.420459 master-0 kubenswrapper[34361]: I0224 05:46:42.420397 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a5b74e9-69ce-46e5-a636-61eebd5bab15-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-f46855c6-pq8bs\" (UID: \"4a5b74e9-69ce-46e5-a636-61eebd5bab15\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"
Feb 24 05:46:42.520644 master-0 kubenswrapper[34361]: I0224 05:46:42.516228 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7dr\" (UniqueName: \"kubernetes.io/projected/3643e6eb-cea0-4a64-b183-1a75f0b5d2af-kube-api-access-pw7dr\") pod \"observability-operator-59bdc8b94-tgbdb\" (UID: \"3643e6eb-cea0-4a64-b183-1a75f0b5d2af\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbdb"
Feb 24 05:46:42.520644 master-0 kubenswrapper[34361]: I0224 05:46:42.516346 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3643e6eb-cea0-4a64-b183-1a75f0b5d2af-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tgbdb\" (UID: \"3643e6eb-cea0-4a64-b183-1a75f0b5d2af\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbdb"
Feb 24 05:46:42.530479 master-0 kubenswrapper[34361]: I0224 05:46:42.530409 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jbdsj"]
Feb 24 05:46:42.531904 master-0 kubenswrapper[34361]: I0224 05:46:42.531884 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jbdsj"
Feb 24 05:46:42.534056 master-0 kubenswrapper[34361]: I0224 05:46:42.534007 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/3643e6eb-cea0-4a64-b183-1a75f0b5d2af-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tgbdb\" (UID: \"3643e6eb-cea0-4a64-b183-1a75f0b5d2af\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbdb"
Feb 24 05:46:42.536759 master-0 kubenswrapper[34361]: I0224 05:46:42.536716 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7dr\" (UniqueName: \"kubernetes.io/projected/3643e6eb-cea0-4a64-b183-1a75f0b5d2af-kube-api-access-pw7dr\") pod \"observability-operator-59bdc8b94-tgbdb\" (UID: \"3643e6eb-cea0-4a64-b183-1a75f0b5d2af\") " pod="openshift-operators/observability-operator-59bdc8b94-tgbdb"
Feb 24 05:46:42.538049 master-0 kubenswrapper[34361]: I0224 05:46:42.538016 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jbdsj"]
Feb 24 05:46:42.570789 master-0 kubenswrapper[34361]: I0224 05:46:42.570718 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"
Feb 24 05:46:42.580644 master-0 kubenswrapper[34361]: I0224 05:46:42.580589 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"
Feb 24 05:46:42.629832 master-0 kubenswrapper[34361]: I0224 05:46:42.629772 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/58835146-c8a7-41cd-9020-f2c7b393fb35-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jbdsj\" (UID: \"58835146-c8a7-41cd-9020-f2c7b393fb35\") " pod="openshift-operators/perses-operator-5bf474d74f-jbdsj"
Feb 24 05:46:42.630085 master-0 kubenswrapper[34361]: I0224 05:46:42.629896 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdffk\" (UniqueName: \"kubernetes.io/projected/58835146-c8a7-41cd-9020-f2c7b393fb35-kube-api-access-cdffk\") pod \"perses-operator-5bf474d74f-jbdsj\" (UID: \"58835146-c8a7-41cd-9020-f2c7b393fb35\") " pod="openshift-operators/perses-operator-5bf474d74f-jbdsj"
Feb 24 05:46:42.689073 master-0 kubenswrapper[34361]: I0224 05:46:42.688985 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tgbdb"
Feb 24 05:46:42.733942 master-0 kubenswrapper[34361]: I0224 05:46:42.733294 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdffk\" (UniqueName: \"kubernetes.io/projected/58835146-c8a7-41cd-9020-f2c7b393fb35-kube-api-access-cdffk\") pod \"perses-operator-5bf474d74f-jbdsj\" (UID: \"58835146-c8a7-41cd-9020-f2c7b393fb35\") " pod="openshift-operators/perses-operator-5bf474d74f-jbdsj"
Feb 24 05:46:42.733942 master-0 kubenswrapper[34361]: I0224 05:46:42.733437 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/58835146-c8a7-41cd-9020-f2c7b393fb35-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jbdsj\" (UID: \"58835146-c8a7-41cd-9020-f2c7b393fb35\") " pod="openshift-operators/perses-operator-5bf474d74f-jbdsj"
Feb 24 05:46:42.735250 master-0 kubenswrapper[34361]: I0224 05:46:42.734675 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/58835146-c8a7-41cd-9020-f2c7b393fb35-openshift-service-ca\") pod \"perses-operator-5bf474d74f-jbdsj\" (UID: \"58835146-c8a7-41cd-9020-f2c7b393fb35\") " pod="openshift-operators/perses-operator-5bf474d74f-jbdsj"
Feb 24 05:46:42.860333 master-0 kubenswrapper[34361]: I0224 05:46:42.857622 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdffk\" (UniqueName: \"kubernetes.io/projected/58835146-c8a7-41cd-9020-f2c7b393fb35-kube-api-access-cdffk\") pod \"perses-operator-5bf474d74f-jbdsj\" (UID: \"58835146-c8a7-41cd-9020-f2c7b393fb35\") " pod="openshift-operators/perses-operator-5bf474d74f-jbdsj"
Feb 24 05:46:42.962121 master-0 kubenswrapper[34361]: I0224 05:46:42.962013 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr"]
Feb 24 05:46:43.148374 master-0 kubenswrapper[34361]: I0224 05:46:43.148301 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs"]
Feb 24 05:46:43.151419 master-0 kubenswrapper[34361]: I0224 05:46:43.151372 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-jbdsj"
Feb 24 05:46:43.293043 master-0 kubenswrapper[34361]: I0224 05:46:43.289698 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tgbdb"]
Feb 24 05:46:43.302567 master-0 kubenswrapper[34361]: I0224 05:46:43.297643 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz"]
Feb 24 05:46:43.330567 master-0 kubenswrapper[34361]: W0224 05:46:43.330501 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod246b7516_1e17_47a0_a3eb_1631b97a15e3.slice/crio-ff28e259bf250036a9ff09281639dd29c352c93edb8b33108bb70085a34af61e WatchSource:0}: Error finding container ff28e259bf250036a9ff09281639dd29c352c93edb8b33108bb70085a34af61e: Status 404 returned error can't find the container with id ff28e259bf250036a9ff09281639dd29c352c93edb8b33108bb70085a34af61e
Feb 24 05:46:43.703859 master-0 kubenswrapper[34361]: I0224 05:46:43.702741 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-jbdsj"]
Feb 24 05:46:43.731264 master-0 kubenswrapper[34361]: W0224 05:46:43.731224 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58835146_c8a7_41cd_9020_f2c7b393fb35.slice/crio-8e92466b312a7691c9d7f5aaf335d1bb2866b1fa71863469692d830c8418324c WatchSource:0}: Error finding container 8e92466b312a7691c9d7f5aaf335d1bb2866b1fa71863469692d830c8418324c: Status 404 returned error can't find the container with id 8e92466b312a7691c9d7f5aaf335d1bb2866b1fa71863469692d830c8418324c
Feb 24 05:46:43.910292 master-0 kubenswrapper[34361]: I0224 05:46:43.910215 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tgbdb" event={"ID":"3643e6eb-cea0-4a64-b183-1a75f0b5d2af","Type":"ContainerStarted","Data":"5519585538a114f8cbcd24c75aff53797a9638941b65be5e6e560eb781877177"}
Feb 24 05:46:43.913465 master-0 kubenswrapper[34361]: I0224 05:46:43.913421 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs" event={"ID":"4a5b74e9-69ce-46e5-a636-61eebd5bab15","Type":"ContainerStarted","Data":"7f342c5bd8f311881d80382a8c52fc40f981f08637644ad87862764fc21c0749"}
Feb 24 05:46:43.915475 master-0 kubenswrapper[34361]: I0224 05:46:43.915407 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr" event={"ID":"2af5df4a-595d-489e-8614-2d494d2c8bf7","Type":"ContainerStarted","Data":"a3288547afbdb9e2176f354ceb890f4077c769a6011896914be776df01dca9cb"}
Feb 24 05:46:43.918121 master-0 kubenswrapper[34361]: I0224 05:46:43.917840 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jbdsj" event={"ID":"58835146-c8a7-41cd-9020-f2c7b393fb35","Type":"ContainerStarted","Data":"8e92466b312a7691c9d7f5aaf335d1bb2866b1fa71863469692d830c8418324c"}
Feb 24 05:46:43.919148 master-0 kubenswrapper[34361]: I0224 05:46:43.919103 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz" event={"ID":"246b7516-1e17-47a0-a3eb-1631b97a15e3","Type":"ContainerStarted","Data":"ff28e259bf250036a9ff09281639dd29c352c93edb8b33108bb70085a34af61e"}
Feb 24 05:46:48.086443 master-0 kubenswrapper[34361]: I0224 05:46:48.082146 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-pxvzq"
Feb 24 05:46:51.314406 master-0 kubenswrapper[34361]: I0224 05:46:51.313034 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-ss7w9"]
Feb 24 05:46:51.315200 master-0 kubenswrapper[34361]: I0224 05:46:51.314699 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-ss7w9"
Feb 24 05:46:51.355678 master-0 kubenswrapper[34361]: I0224 05:46:51.355587 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-ss7w9"]
Feb 24 05:46:51.366342 master-0 kubenswrapper[34361]: I0224 05:46:51.363365 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-889ld\" (UniqueName: \"kubernetes.io/projected/db84fcc2-dfef-4664-9fa9-fe694b4f4067-kube-api-access-889ld\") pod \"cert-manager-545d4d4674-ss7w9\" (UID: \"db84fcc2-dfef-4664-9fa9-fe694b4f4067\") " pod="cert-manager/cert-manager-545d4d4674-ss7w9"
Feb 24 05:46:51.366342 master-0 kubenswrapper[34361]: I0224 05:46:51.363463 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db84fcc2-dfef-4664-9fa9-fe694b4f4067-bound-sa-token\") pod \"cert-manager-545d4d4674-ss7w9\" (UID: \"db84fcc2-dfef-4664-9fa9-fe694b4f4067\") " pod="cert-manager/cert-manager-545d4d4674-ss7w9"
Feb 24 05:46:51.465134 master-0 kubenswrapper[34361]: I0224 05:46:51.465045 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-889ld\" (UniqueName: \"kubernetes.io/projected/db84fcc2-dfef-4664-9fa9-fe694b4f4067-kube-api-access-889ld\") pod \"cert-manager-545d4d4674-ss7w9\" (UID: \"db84fcc2-dfef-4664-9fa9-fe694b4f4067\") " pod="cert-manager/cert-manager-545d4d4674-ss7w9"
Feb 24 05:46:51.465439 master-0 kubenswrapper[34361]: I0224 05:46:51.465142 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db84fcc2-dfef-4664-9fa9-fe694b4f4067-bound-sa-token\") pod \"cert-manager-545d4d4674-ss7w9\" (UID: \"db84fcc2-dfef-4664-9fa9-fe694b4f4067\") " pod="cert-manager/cert-manager-545d4d4674-ss7w9"
Feb 24 05:46:51.488655 master-0 kubenswrapper[34361]: I0224 05:46:51.488577 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/db84fcc2-dfef-4664-9fa9-fe694b4f4067-bound-sa-token\") pod \"cert-manager-545d4d4674-ss7w9\" (UID: \"db84fcc2-dfef-4664-9fa9-fe694b4f4067\") " pod="cert-manager/cert-manager-545d4d4674-ss7w9"
Feb 24 05:46:51.489927 master-0 kubenswrapper[34361]: I0224 05:46:51.489887 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-889ld\" (UniqueName: \"kubernetes.io/projected/db84fcc2-dfef-4664-9fa9-fe694b4f4067-kube-api-access-889ld\") pod \"cert-manager-545d4d4674-ss7w9\" (UID: \"db84fcc2-dfef-4664-9fa9-fe694b4f4067\") " pod="cert-manager/cert-manager-545d4d4674-ss7w9"
Feb 24 05:46:51.637588 master-0 kubenswrapper[34361]: I0224 05:46:51.637526 34361 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-ss7w9" Feb 24 05:46:55.347897 master-0 kubenswrapper[34361]: I0224 05:46:55.347792 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-ss7w9"] Feb 24 05:46:55.392949 master-0 kubenswrapper[34361]: W0224 05:46:55.392892 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb84fcc2_dfef_4664_9fa9_fe694b4f4067.slice/crio-7e694d8a3ad1dda053a6b0ce5aad08fdf6e062c077eb2d2a7d0f752c8e23fcc4 WatchSource:0}: Error finding container 7e694d8a3ad1dda053a6b0ce5aad08fdf6e062c077eb2d2a7d0f752c8e23fcc4: Status 404 returned error can't find the container with id 7e694d8a3ad1dda053a6b0ce5aad08fdf6e062c077eb2d2a7d0f752c8e23fcc4 Feb 24 05:46:56.087049 master-0 kubenswrapper[34361]: I0224 05:46:56.086876 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" event={"ID":"c053f6e1-0c7c-46c9-8e67-4218aef00c90","Type":"ContainerStarted","Data":"51f500b40d80c27964a399624a01251358160d1f27877fa2dd3ab1a2242af006"} Feb 24 05:46:56.088243 master-0 kubenswrapper[34361]: I0224 05:46:56.088153 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 05:46:56.093152 master-0 kubenswrapper[34361]: I0224 05:46:56.093063 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr" event={"ID":"2af5df4a-595d-489e-8614-2d494d2c8bf7","Type":"ContainerStarted","Data":"a1b4ba2a6607dce4d4ee092c74595b20f60a26a0b1d4f8269e3291bd5adbbe32"} Feb 24 05:46:56.098629 master-0 kubenswrapper[34361]: I0224 05:46:56.097790 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-jbdsj" 
event={"ID":"58835146-c8a7-41cd-9020-f2c7b393fb35","Type":"ContainerStarted","Data":"031d5db804645c5662a5804e7b6d3d4f9470fced38800294f66c14ff06b49bfc"} Feb 24 05:46:56.098629 master-0 kubenswrapper[34361]: I0224 05:46:56.097929 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-jbdsj" Feb 24 05:46:56.102964 master-0 kubenswrapper[34361]: I0224 05:46:56.102856 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz" event={"ID":"246b7516-1e17-47a0-a3eb-1631b97a15e3","Type":"ContainerStarted","Data":"98bbd63044f43625952a8d28ca05595342ca3ce2fcadfd3972b38fb2e78fea26"} Feb 24 05:46:56.106834 master-0 kubenswrapper[34361]: I0224 05:46:56.106761 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tgbdb" event={"ID":"3643e6eb-cea0-4a64-b183-1a75f0b5d2af","Type":"ContainerStarted","Data":"0b586909488028075a5be769e32b0c95a721a9e4517aeacd3e73c5be87e3c2d6"} Feb 24 05:46:56.108641 master-0 kubenswrapper[34361]: I0224 05:46:56.108583 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tgbdb" Feb 24 05:46:56.111272 master-0 kubenswrapper[34361]: I0224 05:46:56.111212 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tgbdb" Feb 24 05:46:56.112623 master-0 kubenswrapper[34361]: I0224 05:46:56.112526 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs" event={"ID":"4a5b74e9-69ce-46e5-a636-61eebd5bab15","Type":"ContainerStarted","Data":"59e4954a55b28a332115ed8cc6eb1cc06187c001210913318dc61f032fbe778e"} Feb 24 05:46:56.116022 master-0 kubenswrapper[34361]: I0224 05:46:56.115893 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-545d4d4674-ss7w9" event={"ID":"db84fcc2-dfef-4664-9fa9-fe694b4f4067","Type":"ContainerStarted","Data":"a33785801278c91c6b27f11a584ac65db287a72c38c696e49d3a91c639fb1fa8"} Feb 24 05:46:56.116022 master-0 kubenswrapper[34361]: I0224 05:46:56.116020 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-ss7w9" event={"ID":"db84fcc2-dfef-4664-9fa9-fe694b4f4067","Type":"ContainerStarted","Data":"7e694d8a3ad1dda053a6b0ce5aad08fdf6e062c077eb2d2a7d0f752c8e23fcc4"} Feb 24 05:46:56.137629 master-0 kubenswrapper[34361]: I0224 05:46:56.137451 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" podStartSLOduration=6.394036126 podStartE2EDuration="25.137411351s" podCreationTimestamp="2026-02-24 05:46:31 +0000 UTC" firstStartedPulling="2026-02-24 05:46:32.864800368 +0000 UTC m=+552.567417414" lastFinishedPulling="2026-02-24 05:46:51.608175603 +0000 UTC m=+571.310792639" observedRunningTime="2026-02-24 05:46:56.128043666 +0000 UTC m=+575.830660762" watchObservedRunningTime="2026-02-24 05:46:56.137411351 +0000 UTC m=+575.840028487" Feb 24 05:46:56.168107 master-0 kubenswrapper[34361]: I0224 05:46:56.167950 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-pq8bs" podStartSLOduration=2.432358809 podStartE2EDuration="14.167927338s" podCreationTimestamp="2026-02-24 05:46:42 +0000 UTC" firstStartedPulling="2026-02-24 05:46:43.17572307 +0000 UTC m=+562.878340116" lastFinishedPulling="2026-02-24 05:46:54.911291599 +0000 UTC m=+574.613908645" observedRunningTime="2026-02-24 05:46:56.167486845 +0000 UTC m=+575.870103991" watchObservedRunningTime="2026-02-24 05:46:56.167927338 +0000 UTC m=+575.870544374" Feb 24 05:46:57.691419 master-0 kubenswrapper[34361]: I0224 05:46:57.686646 34361 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-gmzdr" podStartSLOduration=4.8287586529999995 podStartE2EDuration="16.686621634s" podCreationTimestamp="2026-02-24 05:46:41 +0000 UTC" firstStartedPulling="2026-02-24 05:46:42.988750077 +0000 UTC m=+562.691367133" lastFinishedPulling="2026-02-24 05:46:54.846613068 +0000 UTC m=+574.549230114" observedRunningTime="2026-02-24 05:46:57.681890455 +0000 UTC m=+577.384507521" watchObservedRunningTime="2026-02-24 05:46:57.686621634 +0000 UTC m=+577.389238680" Feb 24 05:46:57.738052 master-0 kubenswrapper[34361]: I0224 05:46:57.737143 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-f46855c6-qm7sz" podStartSLOduration=4.228234642 podStartE2EDuration="15.737114811s" podCreationTimestamp="2026-02-24 05:46:42 +0000 UTC" firstStartedPulling="2026-02-24 05:46:43.335799393 +0000 UTC m=+563.038416439" lastFinishedPulling="2026-02-24 05:46:54.844679542 +0000 UTC m=+574.547296608" observedRunningTime="2026-02-24 05:46:57.736695989 +0000 UTC m=+577.439313045" watchObservedRunningTime="2026-02-24 05:46:57.737114811 +0000 UTC m=+577.439731857" Feb 24 05:46:57.784415 master-0 kubenswrapper[34361]: I0224 05:46:57.783615 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-jbdsj" podStartSLOduration=4.671504269 podStartE2EDuration="15.783586659s" podCreationTimestamp="2026-02-24 05:46:42 +0000 UTC" firstStartedPulling="2026-02-24 05:46:43.734066834 +0000 UTC m=+563.436683880" lastFinishedPulling="2026-02-24 05:46:54.846149224 +0000 UTC m=+574.548766270" observedRunningTime="2026-02-24 05:46:57.782189708 +0000 UTC m=+577.484806764" watchObservedRunningTime="2026-02-24 05:46:57.783586659 +0000 UTC m=+577.486203705" Feb 24 05:46:57.817268 master-0 kubenswrapper[34361]: I0224 05:46:57.816358 34361 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-operators/observability-operator-59bdc8b94-tgbdb" podStartSLOduration=4.178436475 podStartE2EDuration="15.816335224s" podCreationTimestamp="2026-02-24 05:46:42 +0000 UTC" firstStartedPulling="2026-02-24 05:46:43.311909781 +0000 UTC m=+563.014526827" lastFinishedPulling="2026-02-24 05:46:54.94980853 +0000 UTC m=+574.652425576" observedRunningTime="2026-02-24 05:46:57.806077512 +0000 UTC m=+577.508694568" watchObservedRunningTime="2026-02-24 05:46:57.816335224 +0000 UTC m=+577.518952270" Feb 24 05:46:57.847569 master-0 kubenswrapper[34361]: I0224 05:46:57.847451 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-ss7w9" podStartSLOduration=6.847411509 podStartE2EDuration="6.847411509s" podCreationTimestamp="2026-02-24 05:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:46:57.832402936 +0000 UTC m=+577.535020002" watchObservedRunningTime="2026-02-24 05:46:57.847411509 +0000 UTC m=+577.550028545" Feb 24 05:47:03.156738 master-0 kubenswrapper[34361]: I0224 05:47:03.156545 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-jbdsj" Feb 24 05:47:11.625453 master-0 kubenswrapper[34361]: I0224 05:47:11.625385 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-688bdcdc8c-4mpqv" Feb 24 05:47:12.401754 master-0 kubenswrapper[34361]: I0224 05:47:12.401683 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-f5b8c49d9-w75vs" Feb 24 05:47:21.815185 master-0 kubenswrapper[34361]: I0224 05:47:21.815088 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g"] Feb 24 05:47:21.816852 master-0 
kubenswrapper[34361]: I0224 05:47:21.816821 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" Feb 24 05:47:21.820808 master-0 kubenswrapper[34361]: I0224 05:47:21.820757 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 24 05:47:21.836534 master-0 kubenswrapper[34361]: I0224 05:47:21.830910 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fsm64"] Feb 24 05:47:21.836534 master-0 kubenswrapper[34361]: I0224 05:47:21.835818 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.840535 master-0 kubenswrapper[34361]: I0224 05:47:21.838647 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 24 05:47:21.840535 master-0 kubenswrapper[34361]: I0224 05:47:21.838912 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 24 05:47:21.844777 master-0 kubenswrapper[34361]: I0224 05:47:21.844465 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g"] Feb 24 05:47:21.847537 master-0 kubenswrapper[34361]: I0224 05:47:21.847447 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8vj2\" (UniqueName: \"kubernetes.io/projected/a65c5216-26e6-4e76-a623-10d3f04ca5ce-kube-api-access-l8vj2\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.847624 master-0 kubenswrapper[34361]: I0224 05:47:21.847607 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8259\" (UniqueName: \"kubernetes.io/projected/94cf373e-7dbb-41de-b6d2-68186d922c29-kube-api-access-g8259\") 
pod \"frr-k8s-webhook-server-78b44bf5bb-9rc2g\" (UID: \"94cf373e-7dbb-41de-b6d2-68186d922c29\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" Feb 24 05:47:21.848013 master-0 kubenswrapper[34361]: I0224 05:47:21.847676 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-conf\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.848013 master-0 kubenswrapper[34361]: I0224 05:47:21.847707 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-startup\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.848013 master-0 kubenswrapper[34361]: I0224 05:47:21.847756 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94cf373e-7dbb-41de-b6d2-68186d922c29-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9rc2g\" (UID: \"94cf373e-7dbb-41de-b6d2-68186d922c29\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" Feb 24 05:47:21.848013 master-0 kubenswrapper[34361]: I0224 05:47:21.847780 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a65c5216-26e6-4e76-a623-10d3f04ca5ce-metrics-certs\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.848013 master-0 kubenswrapper[34361]: I0224 05:47:21.848003 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-reloader\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.848245 master-0 kubenswrapper[34361]: I0224 05:47:21.848095 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-sockets\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.848245 master-0 kubenswrapper[34361]: I0224 05:47:21.848197 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-metrics\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.909521 master-0 kubenswrapper[34361]: I0224 05:47:21.909440 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tds5c"] Feb 24 05:47:21.913218 master-0 kubenswrapper[34361]: I0224 05:47:21.913160 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-tds5c" Feb 24 05:47:21.920248 master-0 kubenswrapper[34361]: I0224 05:47:21.920209 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 24 05:47:21.920423 master-0 kubenswrapper[34361]: I0224 05:47:21.920218 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 24 05:47:21.920526 master-0 kubenswrapper[34361]: I0224 05:47:21.920501 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 24 05:47:21.926937 master-0 kubenswrapper[34361]: I0224 05:47:21.926595 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-hnk7l"] Feb 24 05:47:21.929218 master-0 kubenswrapper[34361]: I0224 05:47:21.929007 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:21.946366 master-0 kubenswrapper[34361]: I0224 05:47:21.945139 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 24 05:47:21.967678 master-0 kubenswrapper[34361]: I0224 05:47:21.965216 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-reloader\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.967678 master-0 kubenswrapper[34361]: I0224 05:47:21.965353 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-sockets\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.967678 master-0 kubenswrapper[34361]: I0224 05:47:21.965441 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-metrics\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.968004 master-0 kubenswrapper[34361]: I0224 05:47:21.967679 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8vj2\" (UniqueName: \"kubernetes.io/projected/a65c5216-26e6-4e76-a623-10d3f04ca5ce-kube-api-access-l8vj2\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.968004 master-0 kubenswrapper[34361]: I0224 05:47:21.967847 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8259\" (UniqueName: \"kubernetes.io/projected/94cf373e-7dbb-41de-b6d2-68186d922c29-kube-api-access-g8259\") pod \"frr-k8s-webhook-server-78b44bf5bb-9rc2g\" (UID: \"94cf373e-7dbb-41de-b6d2-68186d922c29\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" Feb 24 05:47:21.968004 master-0 kubenswrapper[34361]: I0224 05:47:21.967927 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-conf\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.968004 master-0 kubenswrapper[34361]: I0224 05:47:21.967959 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-startup\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.968004 master-0 kubenswrapper[34361]: I0224 05:47:21.967962 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-reloader\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.968167 master-0 kubenswrapper[34361]: I0224 05:47:21.967986 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94cf373e-7dbb-41de-b6d2-68186d922c29-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9rc2g\" (UID: \"94cf373e-7dbb-41de-b6d2-68186d922c29\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" Feb 24 05:47:21.968167 master-0 kubenswrapper[34361]: I0224 05:47:21.968073 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a65c5216-26e6-4e76-a623-10d3f04ca5ce-metrics-certs\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.970868 master-0 kubenswrapper[34361]: I0224 05:47:21.968944 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-sockets\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.970868 master-0 kubenswrapper[34361]: I0224 05:47:21.970796 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-conf\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.971553 master-0 kubenswrapper[34361]: I0224 05:47:21.971522 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a65c5216-26e6-4e76-a623-10d3f04ca5ce-metrics\") pod \"frr-k8s-fsm64\" (UID: 
\"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.974220 master-0 kubenswrapper[34361]: I0224 05:47:21.974129 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a65c5216-26e6-4e76-a623-10d3f04ca5ce-metrics-certs\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.975876 master-0 kubenswrapper[34361]: I0224 05:47:21.975828 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94cf373e-7dbb-41de-b6d2-68186d922c29-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9rc2g\" (UID: \"94cf373e-7dbb-41de-b6d2-68186d922c29\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" Feb 24 05:47:21.976838 master-0 kubenswrapper[34361]: I0224 05:47:21.976786 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a65c5216-26e6-4e76-a623-10d3f04ca5ce-frr-startup\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.998788 master-0 kubenswrapper[34361]: I0224 05:47:21.998726 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8vj2\" (UniqueName: \"kubernetes.io/projected/a65c5216-26e6-4e76-a623-10d3f04ca5ce-kube-api-access-l8vj2\") pod \"frr-k8s-fsm64\" (UID: \"a65c5216-26e6-4e76-a623-10d3f04ca5ce\") " pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:21.999062 master-0 kubenswrapper[34361]: I0224 05:47:21.999001 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8259\" (UniqueName: \"kubernetes.io/projected/94cf373e-7dbb-41de-b6d2-68186d922c29-kube-api-access-g8259\") pod \"frr-k8s-webhook-server-78b44bf5bb-9rc2g\" (UID: \"94cf373e-7dbb-41de-b6d2-68186d922c29\") " 
pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" Feb 24 05:47:22.012708 master-0 kubenswrapper[34361]: I0224 05:47:22.012607 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-hnk7l"] Feb 24 05:47:22.071349 master-0 kubenswrapper[34361]: I0224 05:47:22.071159 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/29866c03-d909-4d71-b528-996c439cdaa0-metallb-excludel2\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.071596 master-0 kubenswrapper[34361]: I0224 05:47:22.071435 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-metrics-certs\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.071596 master-0 kubenswrapper[34361]: I0224 05:47:22.071555 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.071670 master-0 kubenswrapper[34361]: I0224 05:47:22.071599 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9ln\" (UniqueName: \"kubernetes.io/projected/ecfcf889-19ab-4e52-98db-1cb38643ff33-kube-api-access-zf9ln\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.071670 master-0 kubenswrapper[34361]: I0224 05:47:22.071643 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-metrics-certs\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.071740 master-0 kubenswrapper[34361]: I0224 05:47:22.071676 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97nv9\" (UniqueName: \"kubernetes.io/projected/29866c03-d909-4d71-b528-996c439cdaa0-kube-api-access-97nv9\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.071740 master-0 kubenswrapper[34361]: I0224 05:47:22.071717 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-cert\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.171058 master-0 kubenswrapper[34361]: I0224 05:47:22.170988 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" Feb 24 05:47:22.172965 master-0 kubenswrapper[34361]: I0224 05:47:22.172921 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-metrics-certs\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.173013 master-0 kubenswrapper[34361]: I0224 05:47:22.172968 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.173199 master-0 kubenswrapper[34361]: E0224 05:47:22.173152 34361 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 24 05:47:22.173293 master-0 kubenswrapper[34361]: E0224 05:47:22.173276 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-metrics-certs podName:ecfcf889-19ab-4e52-98db-1cb38643ff33 nodeName:}" failed. No retries permitted until 2026-02-24 05:47:22.673237419 +0000 UTC m=+602.375854465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-metrics-certs") pod "controller-69bbfbf88f-hnk7l" (UID: "ecfcf889-19ab-4e52-98db-1cb38643ff33") : secret "controller-certs-secret" not found Feb 24 05:47:22.173488 master-0 kubenswrapper[34361]: E0224 05:47:22.173438 34361 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 05:47:22.173576 master-0 kubenswrapper[34361]: E0224 05:47:22.173556 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist podName:29866c03-d909-4d71-b528-996c439cdaa0 nodeName:}" failed. No retries permitted until 2026-02-24 05:47:22.673506497 +0000 UTC m=+602.376123743 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist") pod "speaker-tds5c" (UID: "29866c03-d909-4d71-b528-996c439cdaa0") : secret "metallb-memberlist" not found Feb 24 05:47:22.173651 master-0 kubenswrapper[34361]: I0224 05:47:22.173595 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf9ln\" (UniqueName: \"kubernetes.io/projected/ecfcf889-19ab-4e52-98db-1cb38643ff33-kube-api-access-zf9ln\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.173868 master-0 kubenswrapper[34361]: I0224 05:47:22.173816 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-metrics-certs\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.173994 master-0 kubenswrapper[34361]: I0224 05:47:22.173957 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-97nv9\" (UniqueName: \"kubernetes.io/projected/29866c03-d909-4d71-b528-996c439cdaa0-kube-api-access-97nv9\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.174205 master-0 kubenswrapper[34361]: I0224 05:47:22.174154 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-cert\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.174555 master-0 kubenswrapper[34361]: I0224 05:47:22.174503 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/29866c03-d909-4d71-b528-996c439cdaa0-metallb-excludel2\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.175381 master-0 kubenswrapper[34361]: I0224 05:47:22.175327 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/29866c03-d909-4d71-b528-996c439cdaa0-metallb-excludel2\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.176053 master-0 kubenswrapper[34361]: I0224 05:47:22.176016 34361 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 24 05:47:22.178461 master-0 kubenswrapper[34361]: I0224 05:47:22.178430 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-metrics-certs\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.191163 
master-0 kubenswrapper[34361]: I0224 05:47:22.191111 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-cert\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.193157 master-0 kubenswrapper[34361]: I0224 05:47:22.193096 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf9ln\" (UniqueName: \"kubernetes.io/projected/ecfcf889-19ab-4e52-98db-1cb38643ff33-kube-api-access-zf9ln\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.197873 master-0 kubenswrapper[34361]: I0224 05:47:22.197673 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97nv9\" (UniqueName: \"kubernetes.io/projected/29866c03-d909-4d71-b528-996c439cdaa0-kube-api-access-97nv9\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.198727 master-0 kubenswrapper[34361]: I0224 05:47:22.198343 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:22.684187 master-0 kubenswrapper[34361]: I0224 05:47:22.683972 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-metrics-certs\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.684681 master-0 kubenswrapper[34361]: I0224 05:47:22.684599 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:22.684921 master-0 kubenswrapper[34361]: E0224 05:47:22.684836 34361 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 24 05:47:22.685011 master-0 kubenswrapper[34361]: E0224 05:47:22.684959 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist podName:29866c03-d909-4d71-b528-996c439cdaa0 nodeName:}" failed. No retries permitted until 2026-02-24 05:47:23.684937023 +0000 UTC m=+603.387554069 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist") pod "speaker-tds5c" (UID: "29866c03-d909-4d71-b528-996c439cdaa0") : secret "metallb-memberlist" not found Feb 24 05:47:22.693946 master-0 kubenswrapper[34361]: I0224 05:47:22.693595 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ecfcf889-19ab-4e52-98db-1cb38643ff33-metrics-certs\") pod \"controller-69bbfbf88f-hnk7l\" (UID: \"ecfcf889-19ab-4e52-98db-1cb38643ff33\") " pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.702901 master-0 kubenswrapper[34361]: I0224 05:47:22.702813 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g"] Feb 24 05:47:22.708741 master-0 kubenswrapper[34361]: W0224 05:47:22.708656 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94cf373e_7dbb_41de_b6d2_68186d922c29.slice/crio-7873d5d69a3ab91b85bc36806c85596a7e047b7d2c043a4fbfbcf93dbba44303 WatchSource:0}: Error finding container 7873d5d69a3ab91b85bc36806c85596a7e047b7d2c043a4fbfbcf93dbba44303: Status 404 returned error can't find the container with id 7873d5d69a3ab91b85bc36806c85596a7e047b7d2c043a4fbfbcf93dbba44303 Feb 24 05:47:22.953133 master-0 kubenswrapper[34361]: I0224 05:47:22.952983 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-hnk7l" Feb 24 05:47:22.979124 master-0 kubenswrapper[34361]: I0224 05:47:22.979033 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" event={"ID":"94cf373e-7dbb-41de-b6d2-68186d922c29","Type":"ContainerStarted","Data":"7873d5d69a3ab91b85bc36806c85596a7e047b7d2c043a4fbfbcf93dbba44303"} Feb 24 05:47:22.980545 master-0 kubenswrapper[34361]: I0224 05:47:22.980447 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerStarted","Data":"d2ff68919ed7eeccd43d3492587758ff3dbc452bb54a6d9ab6ee1fcb0ead53e8"} Feb 24 05:47:23.468608 master-0 kubenswrapper[34361]: W0224 05:47:23.468523 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecfcf889_19ab_4e52_98db_1cb38643ff33.slice/crio-17e7c83eff45765b9f2e10456f86a99d46ed8275ee5d99a7afef882c76864af8 WatchSource:0}: Error finding container 17e7c83eff45765b9f2e10456f86a99d46ed8275ee5d99a7afef882c76864af8: Status 404 returned error can't find the container with id 17e7c83eff45765b9f2e10456f86a99d46ed8275ee5d99a7afef882c76864af8 Feb 24 05:47:23.468873 master-0 kubenswrapper[34361]: I0224 05:47:23.468788 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-hnk7l"] Feb 24 05:47:23.711139 master-0 kubenswrapper[34361]: I0224 05:47:23.711081 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c" Feb 24 05:47:23.711631 master-0 kubenswrapper[34361]: E0224 05:47:23.711599 34361 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Feb 24 05:47:23.711695 master-0 kubenswrapper[34361]: E0224 05:47:23.711656 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist podName:29866c03-d909-4d71-b528-996c439cdaa0 nodeName:}" failed. No retries permitted until 2026-02-24 05:47:25.711640227 +0000 UTC m=+605.414257273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist") pod "speaker-tds5c" (UID: "29866c03-d909-4d71-b528-996c439cdaa0") : secret "metallb-memberlist" not found Feb 24 05:47:23.906029 master-0 kubenswrapper[34361]: I0224 05:47:23.905936 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c85cm"] Feb 24 05:47:23.907868 master-0 kubenswrapper[34361]: I0224 05:47:23.907822 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" Feb 24 05:47:23.930842 master-0 kubenswrapper[34361]: I0224 05:47:23.928606 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm"] Feb 24 05:47:23.930842 master-0 kubenswrapper[34361]: I0224 05:47:23.930053 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" Feb 24 05:47:23.936185 master-0 kubenswrapper[34361]: I0224 05:47:23.936137 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c85cm"] Feb 24 05:47:23.948798 master-0 kubenswrapper[34361]: I0224 05:47:23.942286 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm"] Feb 24 05:47:23.948798 master-0 kubenswrapper[34361]: I0224 05:47:23.944777 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 24 05:47:23.948798 master-0 kubenswrapper[34361]: I0224 05:47:23.948709 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-r6rsr"] Feb 24 05:47:23.951819 master-0 kubenswrapper[34361]: I0224 05:47:23.951779 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.009931 master-0 kubenswrapper[34361]: I0224 05:47:24.009866 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-hnk7l" event={"ID":"ecfcf889-19ab-4e52-98db-1cb38643ff33","Type":"ContainerStarted","Data":"8e0fe7a19e8289104422271bcbf73773c07a653358d776dd04f74c2a20c7c824"} Feb 24 05:47:24.010720 master-0 kubenswrapper[34361]: I0224 05:47:24.010705 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-hnk7l" event={"ID":"ecfcf889-19ab-4e52-98db-1cb38643ff33","Type":"ContainerStarted","Data":"17e7c83eff45765b9f2e10456f86a99d46ed8275ee5d99a7afef882c76864af8"} Feb 24 05:47:24.018534 master-0 kubenswrapper[34361]: I0224 05:47:24.018407 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65hvf\" (UniqueName: \"kubernetes.io/projected/60d4ae3e-8833-4332-ac6f-601db5f57f6d-kube-api-access-65hvf\") 
pod \"nmstate-metrics-58c85c668d-c85cm\" (UID: \"60d4ae3e-8833-4332-ac6f-601db5f57f6d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" Feb 24 05:47:24.018648 master-0 kubenswrapper[34361]: I0224 05:47:24.018565 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7n9t\" (UniqueName: \"kubernetes.io/projected/3b304796-dfe9-479f-be1d-695eeb30d29a-kube-api-access-n7n9t\") pod \"nmstate-webhook-866bcb46dc-qp4cm\" (UID: \"3b304796-dfe9-479f-be1d-695eeb30d29a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" Feb 24 05:47:24.018648 master-0 kubenswrapper[34361]: I0224 05:47:24.018603 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3b304796-dfe9-479f-be1d-695eeb30d29a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qp4cm\" (UID: \"3b304796-dfe9-479f-be1d-695eeb30d29a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" Feb 24 05:47:24.018648 master-0 kubenswrapper[34361]: I0224 05:47:24.018637 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-dbus-socket\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.018788 master-0 kubenswrapper[34361]: I0224 05:47:24.018684 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-nmstate-lock\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.018788 master-0 kubenswrapper[34361]: I0224 05:47:24.018705 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqzrh\" (UniqueName: \"kubernetes.io/projected/ba6b8867-e9ef-4814-b06f-2be69d7e9587-kube-api-access-rqzrh\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.018788 master-0 kubenswrapper[34361]: I0224 05:47:24.018726 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-ovs-socket\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.114787 master-0 kubenswrapper[34361]: I0224 05:47:24.114694 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df"] Feb 24 05:47:24.116366 master-0 kubenswrapper[34361]: I0224 05:47:24.116341 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.121064 master-0 kubenswrapper[34361]: I0224 05:47:24.118997 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 24 05:47:24.121064 master-0 kubenswrapper[34361]: I0224 05:47:24.119229 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 24 05:47:24.121064 master-0 kubenswrapper[34361]: I0224 05:47:24.120207 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7n9t\" (UniqueName: \"kubernetes.io/projected/3b304796-dfe9-479f-be1d-695eeb30d29a-kube-api-access-n7n9t\") pod \"nmstate-webhook-866bcb46dc-qp4cm\" (UID: \"3b304796-dfe9-479f-be1d-695eeb30d29a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" Feb 24 05:47:24.121064 master-0 kubenswrapper[34361]: I0224 05:47:24.120278 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3b304796-dfe9-479f-be1d-695eeb30d29a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qp4cm\" (UID: \"3b304796-dfe9-479f-be1d-695eeb30d29a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" Feb 24 05:47:24.121064 master-0 kubenswrapper[34361]: I0224 05:47:24.120723 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-dbus-socket\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.121064 master-0 kubenswrapper[34361]: I0224 05:47:24.120858 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-dbus-socket\") pod \"nmstate-handler-r6rsr\" (UID: 
\"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.121064 master-0 kubenswrapper[34361]: I0224 05:47:24.120977 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-nmstate-lock\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.121064 master-0 kubenswrapper[34361]: I0224 05:47:24.121051 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqzrh\" (UniqueName: \"kubernetes.io/projected/ba6b8867-e9ef-4814-b06f-2be69d7e9587-kube-api-access-rqzrh\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.121669 master-0 kubenswrapper[34361]: I0224 05:47:24.121605 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-nmstate-lock\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.121824 master-0 kubenswrapper[34361]: I0224 05:47:24.121787 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-ovs-socket\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.122387 master-0 kubenswrapper[34361]: I0224 05:47:24.121122 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ba6b8867-e9ef-4814-b06f-2be69d7e9587-ovs-socket\") pod \"nmstate-handler-r6rsr\" (UID: 
\"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.122744 master-0 kubenswrapper[34361]: I0224 05:47:24.122703 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65hvf\" (UniqueName: \"kubernetes.io/projected/60d4ae3e-8833-4332-ac6f-601db5f57f6d-kube-api-access-65hvf\") pod \"nmstate-metrics-58c85c668d-c85cm\" (UID: \"60d4ae3e-8833-4332-ac6f-601db5f57f6d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" Feb 24 05:47:24.128689 master-0 kubenswrapper[34361]: I0224 05:47:24.128639 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df"] Feb 24 05:47:24.138829 master-0 kubenswrapper[34361]: I0224 05:47:24.138755 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3b304796-dfe9-479f-be1d-695eeb30d29a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-qp4cm\" (UID: \"3b304796-dfe9-479f-be1d-695eeb30d29a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" Feb 24 05:47:24.144126 master-0 kubenswrapper[34361]: I0224 05:47:24.143822 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqzrh\" (UniqueName: \"kubernetes.io/projected/ba6b8867-e9ef-4814-b06f-2be69d7e9587-kube-api-access-rqzrh\") pod \"nmstate-handler-r6rsr\" (UID: \"ba6b8867-e9ef-4814-b06f-2be69d7e9587\") " pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.144126 master-0 kubenswrapper[34361]: I0224 05:47:24.144084 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7n9t\" (UniqueName: \"kubernetes.io/projected/3b304796-dfe9-479f-be1d-695eeb30d29a-kube-api-access-n7n9t\") pod \"nmstate-webhook-866bcb46dc-qp4cm\" (UID: \"3b304796-dfe9-479f-be1d-695eeb30d29a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" Feb 24 05:47:24.148583 master-0 
kubenswrapper[34361]: I0224 05:47:24.148545 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65hvf\" (UniqueName: \"kubernetes.io/projected/60d4ae3e-8833-4332-ac6f-601db5f57f6d-kube-api-access-65hvf\") pod \"nmstate-metrics-58c85c668d-c85cm\" (UID: \"60d4ae3e-8833-4332-ac6f-601db5f57f6d\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" Feb 24 05:47:24.231952 master-0 kubenswrapper[34361]: I0224 05:47:24.231881 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4psvt\" (UniqueName: \"kubernetes.io/projected/2a2e34f6-0c52-4f02-8707-697490f93848-kube-api-access-4psvt\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.232275 master-0 kubenswrapper[34361]: I0224 05:47:24.232156 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a2e34f6-0c52-4f02-8707-697490f93848-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.232275 master-0 kubenswrapper[34361]: I0224 05:47:24.232260 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a2e34f6-0c52-4f02-8707-697490f93848-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.237299 master-0 kubenswrapper[34361]: I0224 05:47:24.237136 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" Feb 24 05:47:24.290120 master-0 kubenswrapper[34361]: I0224 05:47:24.287618 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" Feb 24 05:47:24.313081 master-0 kubenswrapper[34361]: I0224 05:47:24.311868 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-r6rsr" Feb 24 05:47:24.330522 master-0 kubenswrapper[34361]: I0224 05:47:24.330303 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-d54bc7dc7-5mlqz"] Feb 24 05:47:24.332012 master-0 kubenswrapper[34361]: I0224 05:47:24.331944 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.337817 master-0 kubenswrapper[34361]: I0224 05:47:24.337675 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a2e34f6-0c52-4f02-8707-697490f93848-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.337817 master-0 kubenswrapper[34361]: I0224 05:47:24.337758 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a2e34f6-0c52-4f02-8707-697490f93848-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.338215 master-0 kubenswrapper[34361]: I0224 05:47:24.337825 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4psvt\" (UniqueName: 
\"kubernetes.io/projected/2a2e34f6-0c52-4f02-8707-697490f93848-kube-api-access-4psvt\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.340323 master-0 kubenswrapper[34361]: I0224 05:47:24.340282 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2a2e34f6-0c52-4f02-8707-697490f93848-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.351350 master-0 kubenswrapper[34361]: I0224 05:47:24.348378 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a2e34f6-0c52-4f02-8707-697490f93848-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.360509 master-0 kubenswrapper[34361]: I0224 05:47:24.360468 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4psvt\" (UniqueName: \"kubernetes.io/projected/2a2e34f6-0c52-4f02-8707-697490f93848-kube-api-access-4psvt\") pod \"nmstate-console-plugin-5c78fc5d65-447df\" (UID: \"2a2e34f6-0c52-4f02-8707-697490f93848\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.362079 master-0 kubenswrapper[34361]: I0224 05:47:24.361425 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d54bc7dc7-5mlqz"] Feb 24 05:47:24.439750 master-0 kubenswrapper[34361]: I0224 05:47:24.439654 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-config\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.440099 master-0 kubenswrapper[34361]: I0224 05:47:24.439994 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-oauth-config\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.440099 master-0 kubenswrapper[34361]: I0224 05:47:24.440061 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thxhp\" (UniqueName: \"kubernetes.io/projected/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-kube-api-access-thxhp\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.440202 master-0 kubenswrapper[34361]: I0224 05:47:24.440102 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-trusted-ca-bundle\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.440374 master-0 kubenswrapper[34361]: I0224 05:47:24.440294 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-oauth-serving-cert\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.440519 master-0 kubenswrapper[34361]: I0224 
05:47:24.440483 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-service-ca\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.440693 master-0 kubenswrapper[34361]: I0224 05:47:24.440667 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-serving-cert\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.528610 master-0 kubenswrapper[34361]: I0224 05:47:24.528547 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" Feb 24 05:47:24.545869 master-0 kubenswrapper[34361]: I0224 05:47:24.543509 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-config\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.545869 master-0 kubenswrapper[34361]: I0224 05:47:24.543619 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-oauth-config\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz" Feb 24 05:47:24.545869 master-0 kubenswrapper[34361]: I0224 05:47:24.543648 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thxhp\" 
(UniqueName: \"kubernetes.io/projected/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-kube-api-access-thxhp\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.545869 master-0 kubenswrapper[34361]: I0224 05:47:24.543669 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-trusted-ca-bundle\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.545869 master-0 kubenswrapper[34361]: I0224 05:47:24.543701 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-oauth-serving-cert\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.545869 master-0 kubenswrapper[34361]: I0224 05:47:24.543732 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-service-ca\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.545869 master-0 kubenswrapper[34361]: I0224 05:47:24.543794 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-serving-cert\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.547015 master-0 kubenswrapper[34361]: I0224 05:47:24.546946 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-service-ca\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.547192 master-0 kubenswrapper[34361]: I0224 05:47:24.547130 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-oauth-serving-cert\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.548472 master-0 kubenswrapper[34361]: I0224 05:47:24.548205 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-trusted-ca-bundle\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.549256 master-0 kubenswrapper[34361]: I0224 05:47:24.549024 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-config\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.549408 master-0 kubenswrapper[34361]: I0224 05:47:24.549237 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-serving-cert\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.555069 master-0 kubenswrapper[34361]: I0224 05:47:24.555024 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-console-oauth-config\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.575464 master-0 kubenswrapper[34361]: I0224 05:47:24.575375 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thxhp\" (UniqueName: \"kubernetes.io/projected/f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d-kube-api-access-thxhp\") pod \"console-d54bc7dc7-5mlqz\" (UID: \"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d\") " pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.762392 master-0 kubenswrapper[34361]: I0224 05:47:24.759167 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:24.852640 master-0 kubenswrapper[34361]: W0224 05:47:24.852562 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60d4ae3e_8833_4332_ac6f_601db5f57f6d.slice/crio-87efaf00c172ea34b7581ddf4bdd996b50f78a6324a9e2dc230fab351a1aecd0 WatchSource:0}: Error finding container 87efaf00c172ea34b7581ddf4bdd996b50f78a6324a9e2dc230fab351a1aecd0: Status 404 returned error can't find the container with id 87efaf00c172ea34b7581ddf4bdd996b50f78a6324a9e2dc230fab351a1aecd0
Feb 24 05:47:24.855172 master-0 kubenswrapper[34361]: I0224 05:47:24.855108 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-c85cm"]
Feb 24 05:47:24.908967 master-0 kubenswrapper[34361]: W0224 05:47:24.908910 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b304796_dfe9_479f_be1d_695eeb30d29a.slice/crio-6a6b700bcba62bfc4e7bb22b51f7474f62676c3d39ec5306502d96bcebc55caa WatchSource:0}: Error finding container 6a6b700bcba62bfc4e7bb22b51f7474f62676c3d39ec5306502d96bcebc55caa: Status 404 returned error can't find the container with id 6a6b700bcba62bfc4e7bb22b51f7474f62676c3d39ec5306502d96bcebc55caa
Feb 24 05:47:24.909515 master-0 kubenswrapper[34361]: I0224 05:47:24.909473 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm"]
Feb 24 05:47:25.016075 master-0 kubenswrapper[34361]: I0224 05:47:25.015991 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df"]
Feb 24 05:47:25.023987 master-0 kubenswrapper[34361]: W0224 05:47:25.023909 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a2e34f6_0c52_4f02_8707_697490f93848.slice/crio-d07f294b89d7d7cc837c0b938ad3cb227adb0c0bf4a0bfa54a9b12b07fdddaed WatchSource:0}: Error finding container d07f294b89d7d7cc837c0b938ad3cb227adb0c0bf4a0bfa54a9b12b07fdddaed: Status 404 returned error can't find the container with id d07f294b89d7d7cc837c0b938ad3cb227adb0c0bf4a0bfa54a9b12b07fdddaed
Feb 24 05:47:25.037443 master-0 kubenswrapper[34361]: I0224 05:47:25.037392 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" event={"ID":"60d4ae3e-8833-4332-ac6f-601db5f57f6d","Type":"ContainerStarted","Data":"87efaf00c172ea34b7581ddf4bdd996b50f78a6324a9e2dc230fab351a1aecd0"}
Feb 24 05:47:25.038604 master-0 kubenswrapper[34361]: I0224 05:47:25.038569 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r6rsr" event={"ID":"ba6b8867-e9ef-4814-b06f-2be69d7e9587","Type":"ContainerStarted","Data":"a5922b31f5b71565e3f242ce8d8706b3c679919214b198f45bb8730d0600b57e"}
Feb 24 05:47:25.039880 master-0 kubenswrapper[34361]: I0224 05:47:25.039838 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" event={"ID":"3b304796-dfe9-479f-be1d-695eeb30d29a","Type":"ContainerStarted","Data":"6a6b700bcba62bfc4e7bb22b51f7474f62676c3d39ec5306502d96bcebc55caa"}
Feb 24 05:47:25.226262 master-0 kubenswrapper[34361]: I0224 05:47:25.226204 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d54bc7dc7-5mlqz"]
Feb 24 05:47:25.779672 master-0 kubenswrapper[34361]: I0224 05:47:25.779600 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c"
Feb 24 05:47:25.802801 master-0 kubenswrapper[34361]: I0224 05:47:25.802750 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/29866c03-d909-4d71-b528-996c439cdaa0-memberlist\") pod \"speaker-tds5c\" (UID: \"29866c03-d909-4d71-b528-996c439cdaa0\") " pod="metallb-system/speaker-tds5c"
Feb 24 05:47:25.865521 master-0 kubenswrapper[34361]: I0224 05:47:25.865434 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tds5c"
Feb 24 05:47:25.890508 master-0 kubenswrapper[34361]: W0224 05:47:25.890382 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29866c03_d909_4d71_b528_996c439cdaa0.slice/crio-502c99fa7a8fecb18c34da8895861df55ed56b7b463111ddd4ef604ff5beaeb4 WatchSource:0}: Error finding container 502c99fa7a8fecb18c34da8895861df55ed56b7b463111ddd4ef604ff5beaeb4: Status 404 returned error can't find the container with id 502c99fa7a8fecb18c34da8895861df55ed56b7b463111ddd4ef604ff5beaeb4
Feb 24 05:47:26.050718 master-0 kubenswrapper[34361]: I0224 05:47:26.050497 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d54bc7dc7-5mlqz" event={"ID":"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d","Type":"ContainerStarted","Data":"feaf63a71b34babee7ae4fea97217ee00b7f39cfecd8b2a64b6dca37b2f93d49"}
Feb 24 05:47:26.050718 master-0 kubenswrapper[34361]: I0224 05:47:26.050586 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d54bc7dc7-5mlqz" event={"ID":"f074dc6a-a1cb-4de6-8b35-1d8f0bbf768d","Type":"ContainerStarted","Data":"99ad91aa72ab87ec05540938b5b9c4bc8c5bea63ebd01fc86e25072ac1c6a4f2"}
Feb 24 05:47:26.052692 master-0 kubenswrapper[34361]: I0224 05:47:26.052598 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" event={"ID":"2a2e34f6-0c52-4f02-8707-697490f93848","Type":"ContainerStarted","Data":"d07f294b89d7d7cc837c0b938ad3cb227adb0c0bf4a0bfa54a9b12b07fdddaed"}
Feb 24 05:47:26.055441 master-0 kubenswrapper[34361]: I0224 05:47:26.055391 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-hnk7l" event={"ID":"ecfcf889-19ab-4e52-98db-1cb38643ff33","Type":"ContainerStarted","Data":"1ac18d8b853b7a8cf136fdd86799a9e4d1e7618863c1edc6d132b553c7cffcee"}
Feb 24 05:47:26.055631 master-0 kubenswrapper[34361]: I0224 05:47:26.055608 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-hnk7l"
Feb 24 05:47:26.057680 master-0 kubenswrapper[34361]: I0224 05:47:26.057605 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tds5c" event={"ID":"29866c03-d909-4d71-b528-996c439cdaa0","Type":"ContainerStarted","Data":"502c99fa7a8fecb18c34da8895861df55ed56b7b463111ddd4ef604ff5beaeb4"}
Feb 24 05:47:26.090471 master-0 kubenswrapper[34361]: I0224 05:47:26.090361 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-d54bc7dc7-5mlqz" podStartSLOduration=2.090340552 podStartE2EDuration="2.090340552s" podCreationTimestamp="2026-02-24 05:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:47:26.081890854 +0000 UTC m=+605.784507920" watchObservedRunningTime="2026-02-24 05:47:26.090340552 +0000 UTC m=+605.792957598"
Feb 24 05:47:26.111857 master-0 kubenswrapper[34361]: I0224 05:47:26.110500 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-hnk7l" podStartSLOduration=3.830284953 podStartE2EDuration="5.110474455s" podCreationTimestamp="2026-02-24 05:47:21 +0000 UTC" firstStartedPulling="2026-02-24 05:47:23.674550567 +0000 UTC m=+603.377167613" lastFinishedPulling="2026-02-24 05:47:24.954740069 +0000 UTC m=+604.657357115" observedRunningTime="2026-02-24 05:47:26.110102383 +0000 UTC m=+605.812719439" watchObservedRunningTime="2026-02-24 05:47:26.110474455 +0000 UTC m=+605.813091501"
Feb 24 05:47:27.073854 master-0 kubenswrapper[34361]: I0224 05:47:27.073699 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tds5c" event={"ID":"29866c03-d909-4d71-b528-996c439cdaa0","Type":"ContainerStarted","Data":"daa94448b4a70fc7dfdfa7ce8e7883837dc049ed51bd570f88a34e94e4384727"}
Feb 24 05:47:27.073854 master-0 kubenswrapper[34361]: I0224 05:47:27.073858 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tds5c" event={"ID":"29866c03-d909-4d71-b528-996c439cdaa0","Type":"ContainerStarted","Data":"f64cf35bfae97cc29cb594aa445bc1f0006d8e705ff94479a083cc2d8a30a2e0"}
Feb 24 05:47:27.074800 master-0 kubenswrapper[34361]: I0224 05:47:27.074104 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tds5c"
Feb 24 05:47:27.283758 master-0 kubenswrapper[34361]: I0224 05:47:27.283627 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tds5c" podStartSLOduration=6.283596 podStartE2EDuration="6.283596s" podCreationTimestamp="2026-02-24 05:47:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:47:27.282602981 +0000 UTC m=+606.985220047" watchObservedRunningTime="2026-02-24 05:47:27.283596 +0000 UTC m=+606.986213056"
Feb 24 05:47:31.414121 master-0 kubenswrapper[34361]: I0224 05:47:31.413469 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r6rsr" event={"ID":"ba6b8867-e9ef-4814-b06f-2be69d7e9587","Type":"ContainerStarted","Data":"94e4285ce913db84bbc1732694c22803372b2c5310134487689e21fda25839fb"}
Feb 24 05:47:31.414789 master-0 kubenswrapper[34361]: I0224 05:47:31.414712 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-r6rsr"
Feb 24 05:47:31.417323 master-0 kubenswrapper[34361]: I0224 05:47:31.416793 34361 generic.go:334] "Generic (PLEG): container finished" podID="a65c5216-26e6-4e76-a623-10d3f04ca5ce" containerID="312a93786408e686bcb80e775800b5949e642547822296282c5635c9929a3835" exitCode=0
Feb 24 05:47:31.417323 master-0 kubenswrapper[34361]: I0224 05:47:31.416970 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerDied","Data":"312a93786408e686bcb80e775800b5949e642547822296282c5635c9929a3835"}
Feb 24 05:47:31.423411 master-0 kubenswrapper[34361]: I0224 05:47:31.420430 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" event={"ID":"3b304796-dfe9-479f-be1d-695eeb30d29a","Type":"ContainerStarted","Data":"50e70c07dca6a314dbdabc65882bbb8da5206d512061f6588e741d3c2ee2327e"}
Feb 24 05:47:31.423544 master-0 kubenswrapper[34361]: I0224 05:47:31.423459 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" event={"ID":"2a2e34f6-0c52-4f02-8707-697490f93848","Type":"ContainerStarted","Data":"4e46ef4f6bbc5cc9b4c76aaa790087203a8cd5e6798471df9b0b4a05113b041c"}
Feb 24 05:47:31.423595 master-0 kubenswrapper[34361]: I0224 05:47:31.423554 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm"
Feb 24 05:47:31.424859 master-0 kubenswrapper[34361]: I0224 05:47:31.424816 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" event={"ID":"94cf373e-7dbb-41de-b6d2-68186d922c29","Type":"ContainerStarted","Data":"34026e3258d0f4a0bc5c150e53fe19da38b283c7f44353d448b4a696392ac1ad"}
Feb 24 05:47:31.425000 master-0 kubenswrapper[34361]: I0224 05:47:31.424974 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g"
Feb 24 05:47:31.430829 master-0 kubenswrapper[34361]: I0224 05:47:31.430705 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" event={"ID":"60d4ae3e-8833-4332-ac6f-601db5f57f6d","Type":"ContainerStarted","Data":"82c762f26d7c7f31a3e4186cb7f60983b4c2c777cbf455309ba027c5d3e0eb16"}
Feb 24 05:47:31.430897 master-0 kubenswrapper[34361]: I0224 05:47:31.430866 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" event={"ID":"60d4ae3e-8833-4332-ac6f-601db5f57f6d","Type":"ContainerStarted","Data":"060191198ea99ba98cca9277f75d21f6691f54082b1f77969647b5b2f46d483f"}
Feb 24 05:47:31.449272 master-0 kubenswrapper[34361]: I0224 05:47:31.449127 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-r6rsr" podStartSLOduration=1.981254917 podStartE2EDuration="8.449097321s" podCreationTimestamp="2026-02-24 05:47:23 +0000 UTC" firstStartedPulling="2026-02-24 05:47:24.403826883 +0000 UTC m=+604.106443929" lastFinishedPulling="2026-02-24 05:47:30.871669277 +0000 UTC m=+610.574286333" observedRunningTime="2026-02-24 05:47:31.444130345 +0000 UTC m=+611.146747391" watchObservedRunningTime="2026-02-24 05:47:31.449097321 +0000 UTC m=+611.151714407"
Feb 24 05:47:31.478249 master-0 kubenswrapper[34361]: I0224 05:47:31.476575 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm" podStartSLOduration=2.581236425 podStartE2EDuration="8.476550928s" podCreationTimestamp="2026-02-24 05:47:23 +0000 UTC" firstStartedPulling="2026-02-24 05:47:24.925795969 +0000 UTC m=+604.628413015" lastFinishedPulling="2026-02-24 05:47:30.821110472 +0000 UTC m=+610.523727518" observedRunningTime="2026-02-24 05:47:31.474669703 +0000 UTC m=+611.177286749" watchObservedRunningTime="2026-02-24 05:47:31.476550928 +0000 UTC m=+611.179167974"
Feb 24 05:47:31.504607 master-0 kubenswrapper[34361]: I0224 05:47:31.504450 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-447df" podStartSLOduration=1.6507297269999999 podStartE2EDuration="7.504428457s" podCreationTimestamp="2026-02-24 05:47:24 +0000 UTC" firstStartedPulling="2026-02-24 05:47:25.026458196 +0000 UTC m=+604.729075242" lastFinishedPulling="2026-02-24 05:47:30.880156926 +0000 UTC m=+610.582773972" observedRunningTime="2026-02-24 05:47:31.497877365 +0000 UTC m=+611.200494411" watchObservedRunningTime="2026-02-24 05:47:31.504428457 +0000 UTC m=+611.207045503"
Feb 24 05:47:31.534456 master-0 kubenswrapper[34361]: I0224 05:47:31.534280 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-c85cm" podStartSLOduration=2.518502333 podStartE2EDuration="8.534240264s" podCreationTimestamp="2026-02-24 05:47:23 +0000 UTC" firstStartedPulling="2026-02-24 05:47:24.857223254 +0000 UTC m=+604.559840300" lastFinishedPulling="2026-02-24 05:47:30.872961185 +0000 UTC m=+610.575578231" observedRunningTime="2026-02-24 05:47:31.527757622 +0000 UTC m=+611.230374698" watchObservedRunningTime="2026-02-24 05:47:31.534240264 +0000 UTC m=+611.236857310"
Feb 24 05:47:31.617234 master-0 kubenswrapper[34361]: I0224 05:47:31.617129 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g" podStartSLOduration=2.516624078 podStartE2EDuration="10.617108978s" podCreationTimestamp="2026-02-24 05:47:21 +0000 UTC" firstStartedPulling="2026-02-24 05:47:22.712290447 +0000 UTC m=+602.414907493" lastFinishedPulling="2026-02-24 05:47:30.812775357 +0000 UTC m=+610.515392393" observedRunningTime="2026-02-24 05:47:31.591039282 +0000 UTC m=+611.293656328" watchObservedRunningTime="2026-02-24 05:47:31.617108978 +0000 UTC m=+611.319726024"
Feb 24 05:47:32.444151 master-0 kubenswrapper[34361]: I0224 05:47:32.444079 34361 generic.go:334] "Generic (PLEG): container finished" podID="a65c5216-26e6-4e76-a623-10d3f04ca5ce" containerID="a45397debeb998496bf645126aa72ed9fa6976ee70a3b7322cffb15b4f15f675" exitCode=0
Feb 24 05:47:32.444920 master-0 kubenswrapper[34361]: I0224 05:47:32.444262 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerDied","Data":"a45397debeb998496bf645126aa72ed9fa6976ee70a3b7322cffb15b4f15f675"}
Feb 24 05:47:33.456929 master-0 kubenswrapper[34361]: I0224 05:47:33.456842 34361 generic.go:334] "Generic (PLEG): container finished" podID="a65c5216-26e6-4e76-a623-10d3f04ca5ce" containerID="0c95e4d56aa81afd04f9dd92f06fb05e2c7961fd12a5cd50845d100c27e7099e" exitCode=0
Feb 24 05:47:33.458089 master-0 kubenswrapper[34361]: I0224 05:47:33.456930 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerDied","Data":"0c95e4d56aa81afd04f9dd92f06fb05e2c7961fd12a5cd50845d100c27e7099e"}
Feb 24 05:47:34.475396 master-0 kubenswrapper[34361]: I0224 05:47:34.475332 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerStarted","Data":"58be530f24bc05bef089ed0329e166caaa597b5f7c2758dee9e6414c059ed760"}
Feb 24 05:47:34.476083 master-0 kubenswrapper[34361]: I0224 05:47:34.476059 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerStarted","Data":"946a2e14f0c561f78b7d0f0318e2e172a8246aba35a97de4adc3dfaa3f3125b9"}
Feb 24 05:47:34.476229 master-0 kubenswrapper[34361]: I0224 05:47:34.476209 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerStarted","Data":"3b3031f21e1798c2460c4bcb266d1d3c9fcf103a09dac1830cef2d5068a112a5"}
Feb 24 05:47:34.476351 master-0 kubenswrapper[34361]: I0224 05:47:34.476333 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerStarted","Data":"cf0a4f5d54ed479239caf0a268b6127988b085864e97a2ac7943f7beed1022de"}
Feb 24 05:47:34.760584 master-0 kubenswrapper[34361]: I0224 05:47:34.760456 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:34.760584 master-0 kubenswrapper[34361]: I0224 05:47:34.760543 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:34.768406 master-0 kubenswrapper[34361]: I0224 05:47:34.768191 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:35.507998 master-0 kubenswrapper[34361]: I0224 05:47:35.507890 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerStarted","Data":"9850e0aacbb095e8f8a11393c717e9f8a7f33e79035902adfca284f2cd7a1edc"}
Feb 24 05:47:35.508969 master-0 kubenswrapper[34361]: I0224 05:47:35.508043 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fsm64" event={"ID":"a65c5216-26e6-4e76-a623-10d3f04ca5ce","Type":"ContainerStarted","Data":"ee146a40578b41fa94d896987f10e6debfff5ceb062f239174a27901af2599da"}
Feb 24 05:47:35.508969 master-0 kubenswrapper[34361]: I0224 05:47:35.508870 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fsm64"
Feb 24 05:47:35.514897 master-0 kubenswrapper[34361]: I0224 05:47:35.514802 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-d54bc7dc7-5mlqz"
Feb 24 05:47:35.563460 master-0 kubenswrapper[34361]: I0224 05:47:35.562956 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fsm64" podStartSLOduration=6.112153574 podStartE2EDuration="14.562928986s" podCreationTimestamp="2026-02-24 05:47:21 +0000 UTC" firstStartedPulling="2026-02-24 05:47:22.423708098 +0000 UTC m=+602.126325144" lastFinishedPulling="2026-02-24 05:47:30.87448351 +0000 UTC m=+610.577100556" observedRunningTime="2026-02-24 05:47:35.549396978 +0000 UTC m=+615.252014104" watchObservedRunningTime="2026-02-24 05:47:35.562928986 +0000 UTC m=+615.265546032"
Feb 24 05:47:35.655346 master-0 kubenswrapper[34361]: I0224 05:47:35.652301 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-576fb8b7f5-srlps"]
Feb 24 05:47:37.199809 master-0 kubenswrapper[34361]: I0224 05:47:37.199708 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fsm64"
Feb 24 05:47:37.262091 master-0 kubenswrapper[34361]: I0224 05:47:37.261993 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fsm64"
Feb 24 05:47:39.361392 master-0 kubenswrapper[34361]: I0224 05:47:39.361244 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-r6rsr"
Feb 24 05:47:42.191662 master-0 kubenswrapper[34361]: I0224 05:47:42.191579 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9rc2g"
Feb 24 05:47:42.964786 master-0 kubenswrapper[34361]: I0224 05:47:42.964681 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-hnk7l"
Feb 24 05:47:44.299653 master-0 kubenswrapper[34361]: I0224 05:47:44.299524 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-qp4cm"
Feb 24 05:47:45.870601 master-0 kubenswrapper[34361]: I0224 05:47:45.870498 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tds5c"
Feb 24 05:47:51.631994 master-0 kubenswrapper[34361]: I0224 05:47:51.631878 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-r5t2w"]
Feb 24 05:47:51.638590 master-0 kubenswrapper[34361]: I0224 05:47:51.638519 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.641195 master-0 kubenswrapper[34361]: I0224 05:47:51.641147 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Feb 24 05:47:51.662386 master-0 kubenswrapper[34361]: I0224 05:47:51.661344 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-r5t2w"]
Feb 24 05:47:51.832106 master-0 kubenswrapper[34361]: I0224 05:47:51.831969 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-device-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832106 master-0 kubenswrapper[34361]: I0224 05:47:51.832093 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-pod-volumes-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832516 master-0 kubenswrapper[34361]: I0224 05:47:51.832159 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-registration-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832516 master-0 kubenswrapper[34361]: I0224 05:47:51.832210 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-run-udev\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832516 master-0 kubenswrapper[34361]: I0224 05:47:51.832294 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-csi-plugin-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832516 master-0 kubenswrapper[34361]: I0224 05:47:51.832389 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-file-lock-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832516 master-0 kubenswrapper[34361]: I0224 05:47:51.832435 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-sys\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832516 master-0 kubenswrapper[34361]: I0224 05:47:51.832481 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-node-plugin-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832777 master-0 kubenswrapper[34361]: I0224 05:47:51.832525 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-metrics-cert\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832777 master-0 kubenswrapper[34361]: I0224 05:47:51.832574 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctc9\" (UniqueName: \"kubernetes.io/projected/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-kube-api-access-bctc9\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.832777 master-0 kubenswrapper[34361]: I0224 05:47:51.832644 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-lvmd-config\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.934509 master-0 kubenswrapper[34361]: I0224 05:47:51.934266 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-csi-plugin-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.934509 master-0 kubenswrapper[34361]: I0224 05:47:51.934386 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-file-lock-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.934884 master-0 kubenswrapper[34361]: I0224 05:47:51.934648 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-sys\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.934884 master-0 kubenswrapper[34361]: I0224 05:47:51.934721 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-csi-plugin-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.934884 master-0 kubenswrapper[34361]: I0224 05:47:51.934809 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-node-plugin-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.934884 master-0 kubenswrapper[34361]: I0224 05:47:51.934829 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-sys\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.934884 master-0 kubenswrapper[34361]: I0224 05:47:51.934881 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-metrics-cert\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935273 master-0 kubenswrapper[34361]: I0224 05:47:51.934983 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctc9\" (UniqueName: \"kubernetes.io/projected/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-kube-api-access-bctc9\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935273 master-0 kubenswrapper[34361]: I0224 05:47:51.935164 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-lvmd-config\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935545 master-0 kubenswrapper[34361]: I0224 05:47:51.935285 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-device-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935545 master-0 kubenswrapper[34361]: I0224 05:47:51.935345 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-pod-volumes-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935545 master-0 kubenswrapper[34361]: I0224 05:47:51.935359 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-node-plugin-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935545 master-0 kubenswrapper[34361]: I0224 05:47:51.935457 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-registration-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935545 master-0 kubenswrapper[34361]: I0224 05:47:51.935469 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-pod-volumes-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935545 master-0 kubenswrapper[34361]: I0224 05:47:51.935543 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-registration-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.935545 master-0 kubenswrapper[34361]: I0224 05:47:51.935562 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-run-udev\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.936085 master-0 kubenswrapper[34361]: I0224 05:47:51.935520 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-device-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w"
Feb 24 05:47:51.936085 master-0 kubenswrapper[34361]: I0224 05:47:51.935576 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName:
\"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-lvmd-config\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:47:51.936085 master-0 kubenswrapper[34361]: I0224 05:47:51.935636 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-run-udev\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:47:51.936085 master-0 kubenswrapper[34361]: I0224 05:47:51.935756 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-file-lock-dir\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:47:51.939818 master-0 kubenswrapper[34361]: I0224 05:47:51.939749 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-metrics-cert\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:47:51.966253 master-0 kubenswrapper[34361]: I0224 05:47:51.966138 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctc9\" (UniqueName: \"kubernetes.io/projected/f4e4f4dc-3f81-4757-b36f-de4db40e5b9c-kube-api-access-bctc9\") pod \"vg-manager-r5t2w\" (UID: \"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c\") " pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:47:52.007275 master-0 kubenswrapper[34361]: I0224 05:47:52.007142 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:47:52.210997 master-0 kubenswrapper[34361]: I0224 05:47:52.210860 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fsm64" Feb 24 05:47:52.577585 master-0 kubenswrapper[34361]: W0224 05:47:52.569240 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4e4f4dc_3f81_4757_b36f_de4db40e5b9c.slice/crio-2284819cfbc36d9ca9f33a99a75b9a48f2c758be9c8fb685b17029926cd6228d WatchSource:0}: Error finding container 2284819cfbc36d9ca9f33a99a75b9a48f2c758be9c8fb685b17029926cd6228d: Status 404 returned error can't find the container with id 2284819cfbc36d9ca9f33a99a75b9a48f2c758be9c8fb685b17029926cd6228d Feb 24 05:47:52.585187 master-0 kubenswrapper[34361]: I0224 05:47:52.580861 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-r5t2w"] Feb 24 05:47:52.783805 master-0 kubenswrapper[34361]: I0224 05:47:52.783725 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-r5t2w" event={"ID":"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c","Type":"ContainerStarted","Data":"2284819cfbc36d9ca9f33a99a75b9a48f2c758be9c8fb685b17029926cd6228d"} Feb 24 05:47:53.820839 master-0 kubenswrapper[34361]: I0224 05:47:53.820744 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-r5t2w" event={"ID":"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c","Type":"ContainerStarted","Data":"ea419db2dd09c58226b62f974d719cba4a43cee78f0f3cf7c002c8b88295bff0"} Feb 24 05:47:54.835844 master-0 kubenswrapper[34361]: I0224 05:47:54.835776 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-r5t2w_f4e4f4dc-3f81-4757-b36f-de4db40e5b9c/vg-manager/0.log" Feb 24 05:47:54.836583 master-0 kubenswrapper[34361]: I0224 05:47:54.835872 34361 generic.go:334] "Generic (PLEG): container finished" 
podID="f4e4f4dc-3f81-4757-b36f-de4db40e5b9c" containerID="ea419db2dd09c58226b62f974d719cba4a43cee78f0f3cf7c002c8b88295bff0" exitCode=1 Feb 24 05:47:54.836583 master-0 kubenswrapper[34361]: I0224 05:47:54.835920 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-r5t2w" event={"ID":"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c","Type":"ContainerDied","Data":"ea419db2dd09c58226b62f974d719cba4a43cee78f0f3cf7c002c8b88295bff0"} Feb 24 05:47:54.838384 master-0 kubenswrapper[34361]: I0224 05:47:54.838292 34361 scope.go:117] "RemoveContainer" containerID="ea419db2dd09c58226b62f974d719cba4a43cee78f0f3cf7c002c8b88295bff0" Feb 24 05:47:55.241473 master-0 kubenswrapper[34361]: I0224 05:47:55.241404 34361 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Feb 24 05:47:55.683773 master-0 kubenswrapper[34361]: I0224 05:47:55.683601 34361 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-24T05:47:55.241452096Z","Handler":null,"Name":""} Feb 24 05:47:55.686157 master-0 kubenswrapper[34361]: I0224 05:47:55.686110 34361 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Feb 24 05:47:55.686157 master-0 kubenswrapper[34361]: I0224 05:47:55.686160 34361 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Feb 24 05:47:55.859109 master-0 kubenswrapper[34361]: I0224 05:47:55.858823 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-r5t2w_f4e4f4dc-3f81-4757-b36f-de4db40e5b9c/vg-manager/0.log" Feb 24 05:47:55.859109 master-0 kubenswrapper[34361]: I0224 05:47:55.858914 34361 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-r5t2w" event={"ID":"f4e4f4dc-3f81-4757-b36f-de4db40e5b9c","Type":"ContainerStarted","Data":"1473124f3ea13e65ef015f0d2041a90d2f4dc2fa8496e1386009ec4efff2ea24"} Feb 24 05:47:55.919435 master-0 kubenswrapper[34361]: I0224 05:47:55.918024 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-r5t2w" podStartSLOduration=4.917992513 podStartE2EDuration="4.917992513s" podCreationTimestamp="2026-02-24 05:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:47:53.85899323 +0000 UTC m=+633.561610286" watchObservedRunningTime="2026-02-24 05:47:55.917992513 +0000 UTC m=+635.620609569" Feb 24 05:47:58.790009 master-0 kubenswrapper[34361]: I0224 05:47:58.764388 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-tptx6"] Feb 24 05:47:58.796800 master-0 kubenswrapper[34361]: I0224 05:47:58.791417 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tptx6"] Feb 24 05:47:58.796800 master-0 kubenswrapper[34361]: I0224 05:47:58.794825 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tptx6" Feb 24 05:47:58.802022 master-0 kubenswrapper[34361]: I0224 05:47:58.801975 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 24 05:47:58.802345 master-0 kubenswrapper[34361]: I0224 05:47:58.802294 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 24 05:47:58.814866 master-0 kubenswrapper[34361]: I0224 05:47:58.814799 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77vwz\" (UniqueName: \"kubernetes.io/projected/7e5bf3c7-92d6-4aee-978f-47efca23c1fe-kube-api-access-77vwz\") pod \"openstack-operator-index-tptx6\" (UID: \"7e5bf3c7-92d6-4aee-978f-47efca23c1fe\") " pod="openstack-operators/openstack-operator-index-tptx6" Feb 24 05:47:58.919069 master-0 kubenswrapper[34361]: I0224 05:47:58.918996 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77vwz\" (UniqueName: \"kubernetes.io/projected/7e5bf3c7-92d6-4aee-978f-47efca23c1fe-kube-api-access-77vwz\") pod \"openstack-operator-index-tptx6\" (UID: \"7e5bf3c7-92d6-4aee-978f-47efca23c1fe\") " pod="openstack-operators/openstack-operator-index-tptx6" Feb 24 05:47:58.941541 master-0 kubenswrapper[34361]: I0224 05:47:58.936707 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77vwz\" (UniqueName: \"kubernetes.io/projected/7e5bf3c7-92d6-4aee-978f-47efca23c1fe-kube-api-access-77vwz\") pod \"openstack-operator-index-tptx6\" (UID: \"7e5bf3c7-92d6-4aee-978f-47efca23c1fe\") " pod="openstack-operators/openstack-operator-index-tptx6" Feb 24 05:47:59.121045 master-0 kubenswrapper[34361]: I0224 05:47:59.120883 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tptx6" Feb 24 05:47:59.629601 master-0 kubenswrapper[34361]: I0224 05:47:59.628619 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-tptx6"] Feb 24 05:47:59.937330 master-0 kubenswrapper[34361]: I0224 05:47:59.937136 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tptx6" event={"ID":"7e5bf3c7-92d6-4aee-978f-47efca23c1fe","Type":"ContainerStarted","Data":"e539d71c64c0ee76fb884c460bea7f5a7a14c774c3d1fdc0980628dc5fe4b76b"} Feb 24 05:48:00.750941 master-0 kubenswrapper[34361]: I0224 05:48:00.750462 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-576fb8b7f5-srlps" podUID="94166387-6f51-45e5-9ca0-0408bf7067ef" containerName="console" containerID="cri-o://cf00c8e7123005eda0406a98c2b3995657cdb2d9ccb99201bc063a38dc540e73" gracePeriod=15 Feb 24 05:48:00.951212 master-0 kubenswrapper[34361]: I0224 05:48:00.951146 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-576fb8b7f5-srlps_94166387-6f51-45e5-9ca0-0408bf7067ef/console/0.log" Feb 24 05:48:00.951959 master-0 kubenswrapper[34361]: I0224 05:48:00.951219 34361 generic.go:334] "Generic (PLEG): container finished" podID="94166387-6f51-45e5-9ca0-0408bf7067ef" containerID="cf00c8e7123005eda0406a98c2b3995657cdb2d9ccb99201bc063a38dc540e73" exitCode=2 Feb 24 05:48:00.951959 master-0 kubenswrapper[34361]: I0224 05:48:00.951290 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576fb8b7f5-srlps" event={"ID":"94166387-6f51-45e5-9ca0-0408bf7067ef","Type":"ContainerDied","Data":"cf00c8e7123005eda0406a98c2b3995657cdb2d9ccb99201bc063a38dc540e73"} Feb 24 05:48:00.954782 master-0 kubenswrapper[34361]: I0224 05:48:00.954260 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-tptx6" event={"ID":"7e5bf3c7-92d6-4aee-978f-47efca23c1fe","Type":"ContainerStarted","Data":"31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4"} Feb 24 05:48:00.995667 master-0 kubenswrapper[34361]: I0224 05:48:00.994867 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-tptx6" podStartSLOduration=2.100413932 podStartE2EDuration="2.994827079s" podCreationTimestamp="2026-02-24 05:47:58 +0000 UTC" firstStartedPulling="2026-02-24 05:47:59.635827522 +0000 UTC m=+639.338444608" lastFinishedPulling="2026-02-24 05:48:00.530240699 +0000 UTC m=+640.232857755" observedRunningTime="2026-02-24 05:48:00.978676134 +0000 UTC m=+640.681293260" watchObservedRunningTime="2026-02-24 05:48:00.994827079 +0000 UTC m=+640.697444155" Feb 24 05:48:01.363392 master-0 kubenswrapper[34361]: I0224 05:48:01.363292 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-576fb8b7f5-srlps_94166387-6f51-45e5-9ca0-0408bf7067ef/console/0.log" Feb 24 05:48:01.363908 master-0 kubenswrapper[34361]: I0224 05:48:01.363875 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-576fb8b7f5-srlps" Feb 24 05:48:01.470566 master-0 kubenswrapper[34361]: I0224 05:48:01.470483 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-oauth-config\") pod \"94166387-6f51-45e5-9ca0-0408bf7067ef\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " Feb 24 05:48:01.470566 master-0 kubenswrapper[34361]: I0224 05:48:01.470569 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-console-config\") pod \"94166387-6f51-45e5-9ca0-0408bf7067ef\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " Feb 24 05:48:01.470944 master-0 kubenswrapper[34361]: I0224 05:48:01.470663 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxsxj\" (UniqueName: \"kubernetes.io/projected/94166387-6f51-45e5-9ca0-0408bf7067ef-kube-api-access-wxsxj\") pod \"94166387-6f51-45e5-9ca0-0408bf7067ef\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " Feb 24 05:48:01.470944 master-0 kubenswrapper[34361]: I0224 05:48:01.470752 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-serving-cert\") pod \"94166387-6f51-45e5-9ca0-0408bf7067ef\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " Feb 24 05:48:01.470944 master-0 kubenswrapper[34361]: I0224 05:48:01.470794 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-trusted-ca-bundle\") pod \"94166387-6f51-45e5-9ca0-0408bf7067ef\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " Feb 24 05:48:01.470944 master-0 
kubenswrapper[34361]: I0224 05:48:01.470942 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-oauth-serving-cert\") pod \"94166387-6f51-45e5-9ca0-0408bf7067ef\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " Feb 24 05:48:01.471124 master-0 kubenswrapper[34361]: I0224 05:48:01.470981 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-service-ca\") pod \"94166387-6f51-45e5-9ca0-0408bf7067ef\" (UID: \"94166387-6f51-45e5-9ca0-0408bf7067ef\") " Feb 24 05:48:01.472639 master-0 kubenswrapper[34361]: I0224 05:48:01.472560 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-service-ca" (OuterVolumeSpecName: "service-ca") pod "94166387-6f51-45e5-9ca0-0408bf7067ef" (UID: "94166387-6f51-45e5-9ca0-0408bf7067ef"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:48:01.473450 master-0 kubenswrapper[34361]: I0224 05:48:01.473349 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "94166387-6f51-45e5-9ca0-0408bf7067ef" (UID: "94166387-6f51-45e5-9ca0-0408bf7067ef"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:48:01.474229 master-0 kubenswrapper[34361]: I0224 05:48:01.474162 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "94166387-6f51-45e5-9ca0-0408bf7067ef" (UID: "94166387-6f51-45e5-9ca0-0408bf7067ef"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:48:01.475407 master-0 kubenswrapper[34361]: I0224 05:48:01.475346 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "94166387-6f51-45e5-9ca0-0408bf7067ef" (UID: "94166387-6f51-45e5-9ca0-0408bf7067ef"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:48:01.475676 master-0 kubenswrapper[34361]: I0224 05:48:01.475630 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-console-config" (OuterVolumeSpecName: "console-config") pod "94166387-6f51-45e5-9ca0-0408bf7067ef" (UID: "94166387-6f51-45e5-9ca0-0408bf7067ef"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:48:01.476466 master-0 kubenswrapper[34361]: I0224 05:48:01.476404 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94166387-6f51-45e5-9ca0-0408bf7067ef-kube-api-access-wxsxj" (OuterVolumeSpecName: "kube-api-access-wxsxj") pod "94166387-6f51-45e5-9ca0-0408bf7067ef" (UID: "94166387-6f51-45e5-9ca0-0408bf7067ef"). InnerVolumeSpecName "kube-api-access-wxsxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:48:01.479402 master-0 kubenswrapper[34361]: I0224 05:48:01.479276 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "94166387-6f51-45e5-9ca0-0408bf7067ef" (UID: "94166387-6f51-45e5-9ca0-0408bf7067ef"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:48:01.573797 master-0 kubenswrapper[34361]: I0224 05:48:01.573555 34361 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:01.573797 master-0 kubenswrapper[34361]: I0224 05:48:01.573649 34361 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:01.573797 master-0 kubenswrapper[34361]: I0224 05:48:01.573659 34361 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:01.573797 master-0 kubenswrapper[34361]: I0224 05:48:01.573672 34361 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-console-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:01.573797 master-0 kubenswrapper[34361]: I0224 05:48:01.573685 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxsxj\" (UniqueName: \"kubernetes.io/projected/94166387-6f51-45e5-9ca0-0408bf7067ef-kube-api-access-wxsxj\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:01.573797 master-0 kubenswrapper[34361]: I0224 05:48:01.573699 34361 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/94166387-6f51-45e5-9ca0-0408bf7067ef-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:01.573797 master-0 kubenswrapper[34361]: I0224 05:48:01.573714 34361 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/94166387-6f51-45e5-9ca0-0408bf7067ef-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:01.980278 master-0 kubenswrapper[34361]: I0224 05:48:01.980103 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-576fb8b7f5-srlps_94166387-6f51-45e5-9ca0-0408bf7067ef/console/0.log" Feb 24 05:48:01.980278 master-0 kubenswrapper[34361]: I0224 05:48:01.980236 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-576fb8b7f5-srlps" event={"ID":"94166387-6f51-45e5-9ca0-0408bf7067ef","Type":"ContainerDied","Data":"26de91720c854b164a674efb57fff696e1de839fba0f42b312430a2bf460afa8"} Feb 24 05:48:01.981448 master-0 kubenswrapper[34361]: I0224 05:48:01.980290 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-576fb8b7f5-srlps" Feb 24 05:48:01.981448 master-0 kubenswrapper[34361]: I0224 05:48:01.980332 34361 scope.go:117] "RemoveContainer" containerID="cf00c8e7123005eda0406a98c2b3995657cdb2d9ccb99201bc063a38dc540e73" Feb 24 05:48:02.007558 master-0 kubenswrapper[34361]: I0224 05:48:02.007387 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:48:02.009730 master-0 kubenswrapper[34361]: I0224 05:48:02.009595 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:48:02.075996 master-0 kubenswrapper[34361]: I0224 05:48:02.075872 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tptx6"] Feb 24 05:48:02.085605 master-0 kubenswrapper[34361]: I0224 05:48:02.085421 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-576fb8b7f5-srlps"] Feb 24 05:48:02.094158 master-0 kubenswrapper[34361]: I0224 05:48:02.094076 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-console/console-576fb8b7f5-srlps"] Feb 24 05:48:02.278615 master-0 kubenswrapper[34361]: I0224 05:48:02.278487 34361 patch_prober.go:28] interesting pod/console-576fb8b7f5-srlps container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.103:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 05:48:02.279027 master-0 kubenswrapper[34361]: I0224 05:48:02.278616 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-576fb8b7f5-srlps" podUID="94166387-6f51-45e5-9ca0-0408bf7067ef" containerName="console" probeResult="failure" output="Get \"https://10.128.0.103:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 05:48:02.618865 master-0 kubenswrapper[34361]: I0224 05:48:02.618501 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94166387-6f51-45e5-9ca0-0408bf7067ef" path="/var/lib/kubelet/pods/94166387-6f51-45e5-9ca0-0408bf7067ef/volumes" Feb 24 05:48:02.697267 master-0 kubenswrapper[34361]: I0224 05:48:02.695765 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2pkfs"] Feb 24 05:48:02.697267 master-0 kubenswrapper[34361]: E0224 05:48:02.696746 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94166387-6f51-45e5-9ca0-0408bf7067ef" containerName="console" Feb 24 05:48:02.697267 master-0 kubenswrapper[34361]: I0224 05:48:02.696789 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="94166387-6f51-45e5-9ca0-0408bf7067ef" containerName="console" Feb 24 05:48:02.697809 master-0 kubenswrapper[34361]: I0224 05:48:02.697300 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="94166387-6f51-45e5-9ca0-0408bf7067ef" containerName="console" Feb 24 05:48:02.698544 master-0 kubenswrapper[34361]: 
I0224 05:48:02.698500 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:02.709359 master-0 kubenswrapper[34361]: I0224 05:48:02.707006 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2pkfs"] Feb 24 05:48:02.803371 master-0 kubenswrapper[34361]: I0224 05:48:02.803242 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jvw\" (UniqueName: \"kubernetes.io/projected/73ef88f9-e21c-43b6-858a-e0e907001e64-kube-api-access-s4jvw\") pod \"openstack-operator-index-2pkfs\" (UID: \"73ef88f9-e21c-43b6-858a-e0e907001e64\") " pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:02.908050 master-0 kubenswrapper[34361]: I0224 05:48:02.907741 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jvw\" (UniqueName: \"kubernetes.io/projected/73ef88f9-e21c-43b6-858a-e0e907001e64-kube-api-access-s4jvw\") pod \"openstack-operator-index-2pkfs\" (UID: \"73ef88f9-e21c-43b6-858a-e0e907001e64\") " pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:02.944818 master-0 kubenswrapper[34361]: I0224 05:48:02.943470 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4jvw\" (UniqueName: \"kubernetes.io/projected/73ef88f9-e21c-43b6-858a-e0e907001e64-kube-api-access-s4jvw\") pod \"openstack-operator-index-2pkfs\" (UID: \"73ef88f9-e21c-43b6-858a-e0e907001e64\") " pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:02.995772 master-0 kubenswrapper[34361]: I0224 05:48:02.995689 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:48:02.995772 master-0 kubenswrapper[34361]: I0224 05:48:02.995686 34361 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack-operators/openstack-operator-index-tptx6" podUID="7e5bf3c7-92d6-4aee-978f-47efca23c1fe" containerName="registry-server" containerID="cri-o://31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4" gracePeriod=2 Feb 24 05:48:03.001404 master-0 kubenswrapper[34361]: I0224 05:48:03.001345 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-r5t2w" Feb 24 05:48:03.028302 master-0 kubenswrapper[34361]: I0224 05:48:03.028214 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:03.599738 master-0 kubenswrapper[34361]: I0224 05:48:03.599670 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2pkfs"] Feb 24 05:48:03.705406 master-0 kubenswrapper[34361]: I0224 05:48:03.705338 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-tptx6" Feb 24 05:48:03.737427 master-0 kubenswrapper[34361]: I0224 05:48:03.737364 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77vwz\" (UniqueName: \"kubernetes.io/projected/7e5bf3c7-92d6-4aee-978f-47efca23c1fe-kube-api-access-77vwz\") pod \"7e5bf3c7-92d6-4aee-978f-47efca23c1fe\" (UID: \"7e5bf3c7-92d6-4aee-978f-47efca23c1fe\") " Feb 24 05:48:03.740730 master-0 kubenswrapper[34361]: I0224 05:48:03.740673 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e5bf3c7-92d6-4aee-978f-47efca23c1fe-kube-api-access-77vwz" (OuterVolumeSpecName: "kube-api-access-77vwz") pod "7e5bf3c7-92d6-4aee-978f-47efca23c1fe" (UID: "7e5bf3c7-92d6-4aee-978f-47efca23c1fe"). InnerVolumeSpecName "kube-api-access-77vwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:48:03.839591 master-0 kubenswrapper[34361]: I0224 05:48:03.839414 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77vwz\" (UniqueName: \"kubernetes.io/projected/7e5bf3c7-92d6-4aee-978f-47efca23c1fe-kube-api-access-77vwz\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:04.024735 master-0 kubenswrapper[34361]: I0224 05:48:04.024552 34361 generic.go:334] "Generic (PLEG): container finished" podID="7e5bf3c7-92d6-4aee-978f-47efca23c1fe" containerID="31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4" exitCode=0 Feb 24 05:48:04.024735 master-0 kubenswrapper[34361]: I0224 05:48:04.024717 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tptx6" event={"ID":"7e5bf3c7-92d6-4aee-978f-47efca23c1fe","Type":"ContainerDied","Data":"31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4"} Feb 24 05:48:04.025375 master-0 kubenswrapper[34361]: I0224 05:48:04.024677 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-tptx6" Feb 24 05:48:04.025375 master-0 kubenswrapper[34361]: I0224 05:48:04.024796 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-tptx6" event={"ID":"7e5bf3c7-92d6-4aee-978f-47efca23c1fe","Type":"ContainerDied","Data":"e539d71c64c0ee76fb884c460bea7f5a7a14c774c3d1fdc0980628dc5fe4b76b"} Feb 24 05:48:04.025375 master-0 kubenswrapper[34361]: I0224 05:48:04.024831 34361 scope.go:117] "RemoveContainer" containerID="31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4" Feb 24 05:48:04.032657 master-0 kubenswrapper[34361]: I0224 05:48:04.032561 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2pkfs" event={"ID":"73ef88f9-e21c-43b6-858a-e0e907001e64","Type":"ContainerStarted","Data":"15548f64343429bae9aa1d5f2aee1fcd6a13c8ad75ea7fa5165635d91cfff6d0"} Feb 24 05:48:04.055425 master-0 kubenswrapper[34361]: I0224 05:48:04.053666 34361 scope.go:117] "RemoveContainer" containerID="31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4" Feb 24 05:48:04.061086 master-0 kubenswrapper[34361]: E0224 05:48:04.060915 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4\": container with ID starting with 31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4 not found: ID does not exist" containerID="31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4" Feb 24 05:48:04.061086 master-0 kubenswrapper[34361]: I0224 05:48:04.060964 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4"} err="failed to get container status \"31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4\": rpc error: code = 
NotFound desc = could not find container \"31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4\": container with ID starting with 31211abc210230c107a0e1bbf53ff1212c644f7ecd23ea4243083851980cbbf4 not found: ID does not exist" Feb 24 05:48:04.266917 master-0 kubenswrapper[34361]: I0224 05:48:04.266837 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-tptx6"] Feb 24 05:48:04.281494 master-0 kubenswrapper[34361]: I0224 05:48:04.281414 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-tptx6"] Feb 24 05:48:04.616878 master-0 kubenswrapper[34361]: I0224 05:48:04.616710 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e5bf3c7-92d6-4aee-978f-47efca23c1fe" path="/var/lib/kubelet/pods/7e5bf3c7-92d6-4aee-978f-47efca23c1fe/volumes" Feb 24 05:48:05.050071 master-0 kubenswrapper[34361]: I0224 05:48:05.047497 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2pkfs" event={"ID":"73ef88f9-e21c-43b6-858a-e0e907001e64","Type":"ContainerStarted","Data":"bac507c597162d14ed8f61cd3e8b180158cabdbbbf4dcc0ad2c3491692b18935"} Feb 24 05:48:05.078822 master-0 kubenswrapper[34361]: I0224 05:48:05.078552 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2pkfs" podStartSLOduration=2.40844647 podStartE2EDuration="3.078519757s" podCreationTimestamp="2026-02-24 05:48:02 +0000 UTC" firstStartedPulling="2026-02-24 05:48:03.594651191 +0000 UTC m=+643.297268227" lastFinishedPulling="2026-02-24 05:48:04.264724458 +0000 UTC m=+643.967341514" observedRunningTime="2026-02-24 05:48:05.07422105 +0000 UTC m=+644.776838136" watchObservedRunningTime="2026-02-24 05:48:05.078519757 +0000 UTC m=+644.781136843" Feb 24 05:48:13.029129 master-0 kubenswrapper[34361]: I0224 05:48:13.029017 34361 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:13.029906 master-0 kubenswrapper[34361]: I0224 05:48:13.029161 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:13.083576 master-0 kubenswrapper[34361]: I0224 05:48:13.083482 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:13.187448 master-0 kubenswrapper[34361]: I0224 05:48:13.187377 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2pkfs" Feb 24 05:48:14.360297 master-0 kubenswrapper[34361]: I0224 05:48:14.360215 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz"] Feb 24 05:48:14.361081 master-0 kubenswrapper[34361]: E0224 05:48:14.360803 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e5bf3c7-92d6-4aee-978f-47efca23c1fe" containerName="registry-server" Feb 24 05:48:14.361081 master-0 kubenswrapper[34361]: I0224 05:48:14.360824 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e5bf3c7-92d6-4aee-978f-47efca23c1fe" containerName="registry-server" Feb 24 05:48:14.361249 master-0 kubenswrapper[34361]: I0224 05:48:14.361187 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e5bf3c7-92d6-4aee-978f-47efca23c1fe" containerName="registry-server" Feb 24 05:48:14.364379 master-0 kubenswrapper[34361]: I0224 05:48:14.364272 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.376149 master-0 kubenswrapper[34361]: I0224 05:48:14.376045 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz"] Feb 24 05:48:14.385881 master-0 kubenswrapper[34361]: I0224 05:48:14.385799 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.385995 master-0 kubenswrapper[34361]: I0224 05:48:14.385913 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95sb\" (UniqueName: \"kubernetes.io/projected/36175018-7beb-4244-a008-9efe95d6515f-kube-api-access-b95sb\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.386258 master-0 kubenswrapper[34361]: I0224 05:48:14.386196 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.489174 master-0 kubenswrapper[34361]: I0224 05:48:14.489088 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" 
(UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-util\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.489567 master-0 kubenswrapper[34361]: I0224 05:48:14.489197 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b95sb\" (UniqueName: \"kubernetes.io/projected/36175018-7beb-4244-a008-9efe95d6515f-kube-api-access-b95sb\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.489567 master-0 kubenswrapper[34361]: I0224 05:48:14.489254 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.490062 master-0 kubenswrapper[34361]: I0224 05:48:14.490026 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-bundle\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.490252 master-0 kubenswrapper[34361]: I0224 05:48:14.490194 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-util\") pod 
\"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.522443 master-0 kubenswrapper[34361]: I0224 05:48:14.522230 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b95sb\" (UniqueName: \"kubernetes.io/projected/36175018-7beb-4244-a008-9efe95d6515f-kube-api-access-b95sb\") pod \"11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:14.683385 master-0 kubenswrapper[34361]: I0224 05:48:14.683139 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:15.198557 master-0 kubenswrapper[34361]: I0224 05:48:15.194280 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz"] Feb 24 05:48:15.199682 master-0 kubenswrapper[34361]: W0224 05:48:15.199582 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36175018_7beb_4244_a008_9efe95d6515f.slice/crio-94049e4acd59709e69bfba57741523938bab6418354125a7d5339da65548bc9b WatchSource:0}: Error finding container 94049e4acd59709e69bfba57741523938bab6418354125a7d5339da65548bc9b: Status 404 returned error can't find the container with id 94049e4acd59709e69bfba57741523938bab6418354125a7d5339da65548bc9b Feb 24 05:48:16.213205 master-0 kubenswrapper[34361]: I0224 05:48:16.213132 34361 generic.go:334] "Generic (PLEG): container finished" podID="36175018-7beb-4244-a008-9efe95d6515f" containerID="2275f16cc38d90107df8ce67d36626b09c26579c03cc4e3fc3fe038e0d9eddc8" exitCode=0 Feb 24 
05:48:16.214206 master-0 kubenswrapper[34361]: I0224 05:48:16.213393 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" event={"ID":"36175018-7beb-4244-a008-9efe95d6515f","Type":"ContainerDied","Data":"2275f16cc38d90107df8ce67d36626b09c26579c03cc4e3fc3fe038e0d9eddc8"} Feb 24 05:48:16.214424 master-0 kubenswrapper[34361]: I0224 05:48:16.214386 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" event={"ID":"36175018-7beb-4244-a008-9efe95d6515f","Type":"ContainerStarted","Data":"94049e4acd59709e69bfba57741523938bab6418354125a7d5339da65548bc9b"} Feb 24 05:48:17.224590 master-0 kubenswrapper[34361]: I0224 05:48:17.224426 34361 generic.go:334] "Generic (PLEG): container finished" podID="36175018-7beb-4244-a008-9efe95d6515f" containerID="654c4b740e6fb219f471fd1a57cc6f179ac6640cab1c003046103c1f8a0a7c36" exitCode=0 Feb 24 05:48:17.224590 master-0 kubenswrapper[34361]: I0224 05:48:17.224490 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" event={"ID":"36175018-7beb-4244-a008-9efe95d6515f","Type":"ContainerDied","Data":"654c4b740e6fb219f471fd1a57cc6f179ac6640cab1c003046103c1f8a0a7c36"} Feb 24 05:48:18.238590 master-0 kubenswrapper[34361]: I0224 05:48:18.238435 34361 generic.go:334] "Generic (PLEG): container finished" podID="36175018-7beb-4244-a008-9efe95d6515f" containerID="b307572575f78020b80f4e94752751a54f3babb71d971125e6bc049d9cd480f6" exitCode=0 Feb 24 05:48:18.239300 master-0 kubenswrapper[34361]: I0224 05:48:18.238561 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" 
event={"ID":"36175018-7beb-4244-a008-9efe95d6515f","Type":"ContainerDied","Data":"b307572575f78020b80f4e94752751a54f3babb71d971125e6bc049d9cd480f6"} Feb 24 05:48:19.778133 master-0 kubenswrapper[34361]: I0224 05:48:19.778067 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:19.911936 master-0 kubenswrapper[34361]: I0224 05:48:19.911848 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b95sb\" (UniqueName: \"kubernetes.io/projected/36175018-7beb-4244-a008-9efe95d6515f-kube-api-access-b95sb\") pod \"36175018-7beb-4244-a008-9efe95d6515f\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " Feb 24 05:48:19.911936 master-0 kubenswrapper[34361]: I0224 05:48:19.911948 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-bundle\") pod \"36175018-7beb-4244-a008-9efe95d6515f\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " Feb 24 05:48:19.912297 master-0 kubenswrapper[34361]: I0224 05:48:19.912212 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-util\") pod \"36175018-7beb-4244-a008-9efe95d6515f\" (UID: \"36175018-7beb-4244-a008-9efe95d6515f\") " Feb 24 05:48:19.912976 master-0 kubenswrapper[34361]: I0224 05:48:19.912898 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-bundle" (OuterVolumeSpecName: "bundle") pod "36175018-7beb-4244-a008-9efe95d6515f" (UID: "36175018-7beb-4244-a008-9efe95d6515f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:48:19.913112 master-0 kubenswrapper[34361]: I0224 05:48:19.913067 34361 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:19.920039 master-0 kubenswrapper[34361]: I0224 05:48:19.918048 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36175018-7beb-4244-a008-9efe95d6515f-kube-api-access-b95sb" (OuterVolumeSpecName: "kube-api-access-b95sb") pod "36175018-7beb-4244-a008-9efe95d6515f" (UID: "36175018-7beb-4244-a008-9efe95d6515f"). InnerVolumeSpecName "kube-api-access-b95sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:48:19.929808 master-0 kubenswrapper[34361]: I0224 05:48:19.929701 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-util" (OuterVolumeSpecName: "util") pod "36175018-7beb-4244-a008-9efe95d6515f" (UID: "36175018-7beb-4244-a008-9efe95d6515f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:48:20.016022 master-0 kubenswrapper[34361]: I0224 05:48:20.015918 34361 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36175018-7beb-4244-a008-9efe95d6515f-util\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:20.016022 master-0 kubenswrapper[34361]: I0224 05:48:20.015997 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b95sb\" (UniqueName: \"kubernetes.io/projected/36175018-7beb-4244-a008-9efe95d6515f-kube-api-access-b95sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:48:20.292689 master-0 kubenswrapper[34361]: I0224 05:48:20.292283 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" event={"ID":"36175018-7beb-4244-a008-9efe95d6515f","Type":"ContainerDied","Data":"94049e4acd59709e69bfba57741523938bab6418354125a7d5339da65548bc9b"} Feb 24 05:48:20.292689 master-0 kubenswrapper[34361]: I0224 05:48:20.292381 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94049e4acd59709e69bfba57741523938bab6418354125a7d5339da65548bc9b" Feb 24 05:48:20.292689 master-0 kubenswrapper[34361]: I0224 05:48:20.292628 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/11a76e9741f3be63a88784b9f3f329441c07f3f3de97b4e48123ebda14rs9xz" Feb 24 05:48:26.931090 master-0 kubenswrapper[34361]: I0224 05:48:26.931006 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x"] Feb 24 05:48:26.932042 master-0 kubenswrapper[34361]: E0224 05:48:26.931481 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36175018-7beb-4244-a008-9efe95d6515f" containerName="extract" Feb 24 05:48:26.932042 master-0 kubenswrapper[34361]: I0224 05:48:26.931498 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="36175018-7beb-4244-a008-9efe95d6515f" containerName="extract" Feb 24 05:48:26.932042 master-0 kubenswrapper[34361]: E0224 05:48:26.931542 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36175018-7beb-4244-a008-9efe95d6515f" containerName="pull" Feb 24 05:48:26.932042 master-0 kubenswrapper[34361]: I0224 05:48:26.931551 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="36175018-7beb-4244-a008-9efe95d6515f" containerName="pull" Feb 24 05:48:26.932042 master-0 kubenswrapper[34361]: E0224 05:48:26.931567 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36175018-7beb-4244-a008-9efe95d6515f" containerName="util" Feb 24 05:48:26.932042 master-0 kubenswrapper[34361]: I0224 05:48:26.931574 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="36175018-7beb-4244-a008-9efe95d6515f" containerName="util" Feb 24 05:48:26.932042 master-0 kubenswrapper[34361]: I0224 05:48:26.931813 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="36175018-7beb-4244-a008-9efe95d6515f" containerName="extract" Feb 24 05:48:26.932574 master-0 kubenswrapper[34361]: I0224 05:48:26.932547 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" Feb 24 05:48:26.964766 master-0 kubenswrapper[34361]: I0224 05:48:26.964692 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x"] Feb 24 05:48:26.986349 master-0 kubenswrapper[34361]: I0224 05:48:26.983456 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrn72\" (UniqueName: \"kubernetes.io/projected/62242bc9-737b-481a-84b1-fb4fc562f5f6-kube-api-access-qrn72\") pod \"openstack-operator-controller-init-55c649df44-8xq4x\" (UID: \"62242bc9-737b-481a-84b1-fb4fc562f5f6\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" Feb 24 05:48:27.085070 master-0 kubenswrapper[34361]: I0224 05:48:27.084978 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrn72\" (UniqueName: \"kubernetes.io/projected/62242bc9-737b-481a-84b1-fb4fc562f5f6-kube-api-access-qrn72\") pod \"openstack-operator-controller-init-55c649df44-8xq4x\" (UID: \"62242bc9-737b-481a-84b1-fb4fc562f5f6\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" Feb 24 05:48:27.105386 master-0 kubenswrapper[34361]: I0224 05:48:27.104456 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrn72\" (UniqueName: \"kubernetes.io/projected/62242bc9-737b-481a-84b1-fb4fc562f5f6-kube-api-access-qrn72\") pod \"openstack-operator-controller-init-55c649df44-8xq4x\" (UID: \"62242bc9-737b-481a-84b1-fb4fc562f5f6\") " pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" Feb 24 05:48:27.253552 master-0 kubenswrapper[34361]: I0224 05:48:27.253459 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" Feb 24 05:48:27.741289 master-0 kubenswrapper[34361]: I0224 05:48:27.741220 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x"] Feb 24 05:48:27.741520 master-0 kubenswrapper[34361]: W0224 05:48:27.741440 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62242bc9_737b_481a_84b1_fb4fc562f5f6.slice/crio-cc6b2edfcd817a84c42f97c36b33858cbace68ecf9603d71c00ffd46962bca79 WatchSource:0}: Error finding container cc6b2edfcd817a84c42f97c36b33858cbace68ecf9603d71c00ffd46962bca79: Status 404 returned error can't find the container with id cc6b2edfcd817a84c42f97c36b33858cbace68ecf9603d71c00ffd46962bca79 Feb 24 05:48:28.420841 master-0 kubenswrapper[34361]: I0224 05:48:28.420689 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" event={"ID":"62242bc9-737b-481a-84b1-fb4fc562f5f6","Type":"ContainerStarted","Data":"cc6b2edfcd817a84c42f97c36b33858cbace68ecf9603d71c00ffd46962bca79"} Feb 24 05:48:33.475385 master-0 kubenswrapper[34361]: I0224 05:48:33.475238 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" event={"ID":"62242bc9-737b-481a-84b1-fb4fc562f5f6","Type":"ContainerStarted","Data":"3997a56d7f85295137bc0e8a14d8dceccfb3f889c222a046b2a6557e6b2b48a1"} Feb 24 05:48:33.476281 master-0 kubenswrapper[34361]: I0224 05:48:33.475566 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" Feb 24 05:48:33.518556 master-0 kubenswrapper[34361]: I0224 05:48:33.518442 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" podStartSLOduration=2.549520465 podStartE2EDuration="7.518413479s" podCreationTimestamp="2026-02-24 05:48:26 +0000 UTC" firstStartedPulling="2026-02-24 05:48:27.748744349 +0000 UTC m=+667.451361405" lastFinishedPulling="2026-02-24 05:48:32.717637373 +0000 UTC m=+672.420254419" observedRunningTime="2026-02-24 05:48:33.511545323 +0000 UTC m=+673.214162359" watchObservedRunningTime="2026-02-24 05:48:33.518413479 +0000 UTC m=+673.221030535" Feb 24 05:48:37.277915 master-0 kubenswrapper[34361]: I0224 05:48:37.277500 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55c649df44-8xq4x" Feb 24 05:48:57.933865 master-0 kubenswrapper[34361]: I0224 05:48:57.933777 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng"] Feb 24 05:48:57.942330 master-0 kubenswrapper[34361]: I0224 05:48:57.935654 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng" Feb 24 05:48:57.958133 master-0 kubenswrapper[34361]: I0224 05:48:57.956921 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn"] Feb 24 05:48:57.958899 master-0 kubenswrapper[34361]: I0224 05:48:57.958848 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn" Feb 24 05:48:57.975905 master-0 kubenswrapper[34361]: I0224 05:48:57.972228 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn"] Feb 24 05:48:57.988751 master-0 kubenswrapper[34361]: I0224 05:48:57.987487 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng"] Feb 24 05:48:57.998723 master-0 kubenswrapper[34361]: I0224 05:48:57.998665 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j"] Feb 24 05:48:58.000438 master-0 kubenswrapper[34361]: I0224 05:48:58.000406 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j" Feb 24 05:48:58.029374 master-0 kubenswrapper[34361]: I0224 05:48:58.026006 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j"] Feb 24 05:48:58.056980 master-0 kubenswrapper[34361]: I0224 05:48:58.038750 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4462\" (UniqueName: \"kubernetes.io/projected/8d9c9b31-7abc-4f3c-b0ee-419a96c0f4aa-kube-api-access-q4462\") pod \"designate-operator-controller-manager-6d8bf5c495-vq97j\" (UID: \"8d9c9b31-7abc-4f3c-b0ee-419a96c0f4aa\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j" Feb 24 05:48:58.056980 master-0 kubenswrapper[34361]: I0224 05:48:58.043339 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd5g2\" (UniqueName: \"kubernetes.io/projected/179ae4f1-42de-4005-b33c-fd32bddbc2ba-kube-api-access-jd5g2\") pod 
\"barbican-operator-controller-manager-868647ff47-rngmn\" (UID: \"179ae4f1-42de-4005-b33c-fd32bddbc2ba\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn" Feb 24 05:48:58.056980 master-0 kubenswrapper[34361]: I0224 05:48:58.043673 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2px7k\" (UniqueName: \"kubernetes.io/projected/4f457c03-e121-401c-b724-4dd147b7ff3b-kube-api-access-2px7k\") pod \"cinder-operator-controller-manager-55d77d7b5c-m52ng\" (UID: \"4f457c03-e121-401c-b724-4dd147b7ff3b\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng" Feb 24 05:48:58.056980 master-0 kubenswrapper[34361]: I0224 05:48:58.053205 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv"] Feb 24 05:48:58.056980 master-0 kubenswrapper[34361]: I0224 05:48:58.056620 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv" Feb 24 05:48:58.071385 master-0 kubenswrapper[34361]: I0224 05:48:58.069268 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-75df9"] Feb 24 05:48:58.071385 master-0 kubenswrapper[34361]: I0224 05:48:58.071292 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9" Feb 24 05:48:58.113898 master-0 kubenswrapper[34361]: I0224 05:48:58.112845 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv"] Feb 24 05:48:58.123041 master-0 kubenswrapper[34361]: I0224 05:48:58.122769 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt"] Feb 24 05:48:58.125011 master-0 kubenswrapper[34361]: I0224 05:48:58.124546 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt" Feb 24 05:48:58.153538 master-0 kubenswrapper[34361]: I0224 05:48:58.149951 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sh48\" (UniqueName: \"kubernetes.io/projected/2351f6f1-b876-4809-85ea-79fcdb287059-kube-api-access-9sh48\") pod \"heat-operator-controller-manager-69f49c598c-75df9\" (UID: \"2351f6f1-b876-4809-85ea-79fcdb287059\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9" Feb 24 05:48:58.153538 master-0 kubenswrapper[34361]: I0224 05:48:58.150035 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4462\" (UniqueName: \"kubernetes.io/projected/8d9c9b31-7abc-4f3c-b0ee-419a96c0f4aa-kube-api-access-q4462\") pod \"designate-operator-controller-manager-6d8bf5c495-vq97j\" (UID: \"8d9c9b31-7abc-4f3c-b0ee-419a96c0f4aa\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j" Feb 24 05:48:58.153538 master-0 kubenswrapper[34361]: I0224 05:48:58.150088 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd5g2\" (UniqueName: 
\"kubernetes.io/projected/179ae4f1-42de-4005-b33c-fd32bddbc2ba-kube-api-access-jd5g2\") pod \"barbican-operator-controller-manager-868647ff47-rngmn\" (UID: \"179ae4f1-42de-4005-b33c-fd32bddbc2ba\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn" Feb 24 05:48:58.153538 master-0 kubenswrapper[34361]: I0224 05:48:58.150111 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bqvf\" (UniqueName: \"kubernetes.io/projected/a976689d-dcf3-4a33-9530-dce5ff43bddd-kube-api-access-7bqvf\") pod \"glance-operator-controller-manager-784b5bb6c5-zghgv\" (UID: \"a976689d-dcf3-4a33-9530-dce5ff43bddd\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv" Feb 24 05:48:58.153538 master-0 kubenswrapper[34361]: I0224 05:48:58.150152 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2px7k\" (UniqueName: \"kubernetes.io/projected/4f457c03-e121-401c-b724-4dd147b7ff3b-kube-api-access-2px7k\") pod \"cinder-operator-controller-manager-55d77d7b5c-m52ng\" (UID: \"4f457c03-e121-401c-b724-4dd147b7ff3b\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng" Feb 24 05:48:58.153538 master-0 kubenswrapper[34361]: I0224 05:48:58.151063 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-75df9"] Feb 24 05:48:58.207405 master-0 kubenswrapper[34361]: I0224 05:48:58.197553 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2px7k\" (UniqueName: \"kubernetes.io/projected/4f457c03-e121-401c-b724-4dd147b7ff3b-kube-api-access-2px7k\") pod \"cinder-operator-controller-manager-55d77d7b5c-m52ng\" (UID: \"4f457c03-e121-401c-b724-4dd147b7ff3b\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng" Feb 24 05:48:58.207405 master-0 kubenswrapper[34361]: I0224 05:48:58.199482 
34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4462\" (UniqueName: \"kubernetes.io/projected/8d9c9b31-7abc-4f3c-b0ee-419a96c0f4aa-kube-api-access-q4462\") pod \"designate-operator-controller-manager-6d8bf5c495-vq97j\" (UID: \"8d9c9b31-7abc-4f3c-b0ee-419a96c0f4aa\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j" Feb 24 05:48:58.207405 master-0 kubenswrapper[34361]: I0224 05:48:58.204538 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd5g2\" (UniqueName: \"kubernetes.io/projected/179ae4f1-42de-4005-b33c-fd32bddbc2ba-kube-api-access-jd5g2\") pod \"barbican-operator-controller-manager-868647ff47-rngmn\" (UID: \"179ae4f1-42de-4005-b33c-fd32bddbc2ba\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn" Feb 24 05:48:58.265418 master-0 kubenswrapper[34361]: I0224 05:48:58.262235 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9jrj\" (UniqueName: \"kubernetes.io/projected/ec02b59d-8c1d-4117-a043-2f2536f665e4-kube-api-access-j9jrj\") pod \"horizon-operator-controller-manager-5b9b8895d5-gmljt\" (UID: \"ec02b59d-8c1d-4117-a043-2f2536f665e4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt" Feb 24 05:48:58.265418 master-0 kubenswrapper[34361]: I0224 05:48:58.262294 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt"] Feb 24 05:48:58.265418 master-0 kubenswrapper[34361]: I0224 05:48:58.262525 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bqvf\" (UniqueName: \"kubernetes.io/projected/a976689d-dcf3-4a33-9530-dce5ff43bddd-kube-api-access-7bqvf\") pod \"glance-operator-controller-manager-784b5bb6c5-zghgv\" (UID: \"a976689d-dcf3-4a33-9530-dce5ff43bddd\") " 
pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv" Feb 24 05:48:58.265418 master-0 kubenswrapper[34361]: I0224 05:48:58.263233 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sh48\" (UniqueName: \"kubernetes.io/projected/2351f6f1-b876-4809-85ea-79fcdb287059-kube-api-access-9sh48\") pod \"heat-operator-controller-manager-69f49c598c-75df9\" (UID: \"2351f6f1-b876-4809-85ea-79fcdb287059\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9" Feb 24 05:48:58.289862 master-0 kubenswrapper[34361]: I0224 05:48:58.284486 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng" Feb 24 05:48:58.299205 master-0 kubenswrapper[34361]: I0224 05:48:58.299083 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m"] Feb 24 05:48:58.301882 master-0 kubenswrapper[34361]: I0224 05:48:58.301830 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:48:58.304070 master-0 kubenswrapper[34361]: I0224 05:48:58.303993 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bqvf\" (UniqueName: \"kubernetes.io/projected/a976689d-dcf3-4a33-9530-dce5ff43bddd-kube-api-access-7bqvf\") pod \"glance-operator-controller-manager-784b5bb6c5-zghgv\" (UID: \"a976689d-dcf3-4a33-9530-dce5ff43bddd\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv" Feb 24 05:48:58.304343 master-0 kubenswrapper[34361]: I0224 05:48:58.304277 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 24 05:48:58.316365 master-0 kubenswrapper[34361]: I0224 05:48:58.311573 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sh48\" (UniqueName: \"kubernetes.io/projected/2351f6f1-b876-4809-85ea-79fcdb287059-kube-api-access-9sh48\") pod \"heat-operator-controller-manager-69f49c598c-75df9\" (UID: \"2351f6f1-b876-4809-85ea-79fcdb287059\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9" Feb 24 05:48:58.316365 master-0 kubenswrapper[34361]: I0224 05:48:58.312584 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j"] Feb 24 05:48:58.316365 master-0 kubenswrapper[34361]: I0224 05:48:58.314090 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j" Feb 24 05:48:58.316937 master-0 kubenswrapper[34361]: I0224 05:48:58.316885 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn" Feb 24 05:48:58.332346 master-0 kubenswrapper[34361]: I0224 05:48:58.331296 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m"] Feb 24 05:48:58.344964 master-0 kubenswrapper[34361]: I0224 05:48:58.344059 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j" Feb 24 05:48:58.369921 master-0 kubenswrapper[34361]: I0224 05:48:58.368045 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j"] Feb 24 05:48:58.369921 master-0 kubenswrapper[34361]: I0224 05:48:58.369876 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:48:58.370984 master-0 kubenswrapper[34361]: I0224 05:48:58.370020 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9jrj\" (UniqueName: \"kubernetes.io/projected/ec02b59d-8c1d-4117-a043-2f2536f665e4-kube-api-access-j9jrj\") pod \"horizon-operator-controller-manager-5b9b8895d5-gmljt\" (UID: \"ec02b59d-8c1d-4117-a043-2f2536f665e4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt" Feb 24 05:48:58.370984 master-0 kubenswrapper[34361]: I0224 05:48:58.370059 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hztpd\" (UniqueName: \"kubernetes.io/projected/2573e160-9095-4011-a842-94316fa317b8-kube-api-access-hztpd\") pod 
\"ironic-operator-controller-manager-554564d7fc-db24j\" (UID: \"2573e160-9095-4011-a842-94316fa317b8\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j" Feb 24 05:48:58.370984 master-0 kubenswrapper[34361]: I0224 05:48:58.370115 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv8b7\" (UniqueName: \"kubernetes.io/projected/d635d276-775a-4aef-b331-f03468985b12-kube-api-access-rv8b7\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:48:58.393834 master-0 kubenswrapper[34361]: I0224 05:48:58.393782 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9jrj\" (UniqueName: \"kubernetes.io/projected/ec02b59d-8c1d-4117-a043-2f2536f665e4-kube-api-access-j9jrj\") pod \"horizon-operator-controller-manager-5b9b8895d5-gmljt\" (UID: \"ec02b59d-8c1d-4117-a043-2f2536f665e4\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt" Feb 24 05:48:58.394080 master-0 kubenswrapper[34361]: I0224 05:48:58.393868 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w"] Feb 24 05:48:58.395544 master-0 kubenswrapper[34361]: I0224 05:48:58.395517 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w" Feb 24 05:48:58.414523 master-0 kubenswrapper[34361]: I0224 05:48:58.414435 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qbghx"] Feb 24 05:48:58.422586 master-0 kubenswrapper[34361]: I0224 05:48:58.417651 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx" Feb 24 05:48:58.431005 master-0 kubenswrapper[34361]: I0224 05:48:58.430947 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv" Feb 24 05:48:58.431567 master-0 kubenswrapper[34361]: I0224 05:48:58.431538 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w"] Feb 24 05:48:58.449343 master-0 kubenswrapper[34361]: I0224 05:48:58.441188 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qbghx"] Feb 24 05:48:58.471281 master-0 kubenswrapper[34361]: I0224 05:48:58.471230 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf"] Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: I0224 05:48:58.472806 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j9zs\" (UniqueName: \"kubernetes.io/projected/b4a2f137-f0c7-481c-a308-51f97186744d-kube-api-access-2j9zs\") pod \"keystone-operator-controller-manager-b4d948c87-zlj5w\" (UID: \"b4a2f137-f0c7-481c-a308-51f97186744d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w" Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: I0224 05:48:58.472917 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: I0224 05:48:58.473015 34361 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9n9q\" (UniqueName: \"kubernetes.io/projected/de466d9b-824c-49c7-946e-be9936d48d41-kube-api-access-j9n9q\") pod \"manila-operator-controller-manager-67d996989d-qbghx\" (UID: \"de466d9b-824c-49c7-946e-be9936d48d41\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx" Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: I0224 05:48:58.473071 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hztpd\" (UniqueName: \"kubernetes.io/projected/2573e160-9095-4011-a842-94316fa317b8-kube-api-access-hztpd\") pod \"ironic-operator-controller-manager-554564d7fc-db24j\" (UID: \"2573e160-9095-4011-a842-94316fa317b8\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j" Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: I0224 05:48:58.473127 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv8b7\" (UniqueName: \"kubernetes.io/projected/d635d276-775a-4aef-b331-f03468985b12-kube-api-access-rv8b7\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: I0224 05:48:58.473647 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf" Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: I0224 05:48:58.473977 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9" Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: E0224 05:48:58.474760 34361 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 05:48:58.482364 master-0 kubenswrapper[34361]: E0224 05:48:58.476193 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert podName:d635d276-775a-4aef-b331-f03468985b12 nodeName:}" failed. No retries permitted until 2026-02-24 05:48:58.976157749 +0000 UTC m=+698.678774795 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert") pod "infra-operator-controller-manager-5f879c76b6-bv48m" (UID: "d635d276-775a-4aef-b331-f03468985b12") : secret "infra-operator-webhook-server-cert" not found Feb 24 05:48:58.544588 master-0 kubenswrapper[34361]: I0224 05:48:58.537283 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv8b7\" (UniqueName: \"kubernetes.io/projected/d635d276-775a-4aef-b331-f03468985b12-kube-api-access-rv8b7\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:48:58.544588 master-0 kubenswrapper[34361]: I0224 05:48:58.538212 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hztpd\" (UniqueName: \"kubernetes.io/projected/2573e160-9095-4011-a842-94316fa317b8-kube-api-access-hztpd\") pod \"ironic-operator-controller-manager-554564d7fc-db24j\" (UID: \"2573e160-9095-4011-a842-94316fa317b8\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j" Feb 24 05:48:58.571720 master-0 kubenswrapper[34361]: I0224 
05:48:58.570848 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2"] Feb 24 05:48:58.572572 master-0 kubenswrapper[34361]: I0224 05:48:58.572524 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2" Feb 24 05:48:58.574238 master-0 kubenswrapper[34361]: I0224 05:48:58.574200 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j9zs\" (UniqueName: \"kubernetes.io/projected/b4a2f137-f0c7-481c-a308-51f97186744d-kube-api-access-2j9zs\") pod \"keystone-operator-controller-manager-b4d948c87-zlj5w\" (UID: \"b4a2f137-f0c7-481c-a308-51f97186744d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w" Feb 24 05:48:58.574349 master-0 kubenswrapper[34361]: I0224 05:48:58.574321 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9n9q\" (UniqueName: \"kubernetes.io/projected/de466d9b-824c-49c7-946e-be9936d48d41-kube-api-access-j9n9q\") pod \"manila-operator-controller-manager-67d996989d-qbghx\" (UID: \"de466d9b-824c-49c7-946e-be9936d48d41\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx" Feb 24 05:48:58.574401 master-0 kubenswrapper[34361]: I0224 05:48:58.574359 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhqkl\" (UniqueName: \"kubernetes.io/projected/4a1b71ea-8ffd-47f7-8c21-816211026592-kube-api-access-fhqkl\") pod \"mariadb-operator-controller-manager-6994f66f48-28hdf\" (UID: \"4a1b71ea-8ffd-47f7-8c21-816211026592\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf" Feb 24 05:48:58.586618 master-0 kubenswrapper[34361]: I0224 05:48:58.586287 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2"] Feb 24 05:48:58.604022 master-0 kubenswrapper[34361]: I0224 05:48:58.603974 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9n9q\" (UniqueName: \"kubernetes.io/projected/de466d9b-824c-49c7-946e-be9936d48d41-kube-api-access-j9n9q\") pod \"manila-operator-controller-manager-67d996989d-qbghx\" (UID: \"de466d9b-824c-49c7-946e-be9936d48d41\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx" Feb 24 05:48:58.607319 master-0 kubenswrapper[34361]: I0224 05:48:58.605284 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt" Feb 24 05:48:58.608248 master-0 kubenswrapper[34361]: I0224 05:48:58.608189 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j9zs\" (UniqueName: \"kubernetes.io/projected/b4a2f137-f0c7-481c-a308-51f97186744d-kube-api-access-2j9zs\") pod \"keystone-operator-controller-manager-b4d948c87-zlj5w\" (UID: \"b4a2f137-f0c7-481c-a308-51f97186744d\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w" Feb 24 05:48:58.669587 master-0 kubenswrapper[34361]: I0224 05:48:58.667991 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf"] Feb 24 05:48:58.683353 master-0 kubenswrapper[34361]: I0224 05:48:58.679708 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8"] Feb 24 05:48:58.690582 master-0 kubenswrapper[34361]: I0224 05:48:58.690527 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8" Feb 24 05:48:58.691656 master-0 kubenswrapper[34361]: I0224 05:48:58.691295 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8"] Feb 24 05:48:58.698024 master-0 kubenswrapper[34361]: I0224 05:48:58.697961 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhqkl\" (UniqueName: \"kubernetes.io/projected/4a1b71ea-8ffd-47f7-8c21-816211026592-kube-api-access-fhqkl\") pod \"mariadb-operator-controller-manager-6994f66f48-28hdf\" (UID: \"4a1b71ea-8ffd-47f7-8c21-816211026592\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf" Feb 24 05:48:58.716167 master-0 kubenswrapper[34361]: I0224 05:48:58.716008 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54"] Feb 24 05:48:58.780941 master-0 kubenswrapper[34361]: I0224 05:48:58.773501 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j" Feb 24 05:48:58.780941 master-0 kubenswrapper[34361]: I0224 05:48:58.775880 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhqkl\" (UniqueName: \"kubernetes.io/projected/4a1b71ea-8ffd-47f7-8c21-816211026592-kube-api-access-fhqkl\") pod \"mariadb-operator-controller-manager-6994f66f48-28hdf\" (UID: \"4a1b71ea-8ffd-47f7-8c21-816211026592\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf" Feb 24 05:48:58.798424 master-0 kubenswrapper[34361]: I0224 05:48:58.789935 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54"] Feb 24 05:48:58.798424 master-0 kubenswrapper[34361]: I0224 05:48:58.789995 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j"] Feb 24 05:48:58.798424 master-0 kubenswrapper[34361]: I0224 05:48:58.791135 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b"] Feb 24 05:48:58.798424 master-0 kubenswrapper[34361]: I0224 05:48:58.791904 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b" Feb 24 05:48:58.798424 master-0 kubenswrapper[34361]: I0224 05:48:58.792514 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54" Feb 24 05:48:58.798424 master-0 kubenswrapper[34361]: I0224 05:48:58.792611 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:48:58.799949 master-0 kubenswrapper[34361]: I0224 05:48:58.799714 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrpl\" (UniqueName: \"kubernetes.io/projected/cf59d571-b148-4f01-823f-2d996694d934-kube-api-access-lqrpl\") pod \"neutron-operator-controller-manager-6bd4687957-svmn2\" (UID: \"cf59d571-b148-4f01-823f-2d996694d934\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2" Feb 24 05:48:58.799949 master-0 kubenswrapper[34361]: I0224 05:48:58.799813 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-787wq\" (UniqueName: \"kubernetes.io/projected/404ea967-df67-480c-bb2a-fd67aba90b6c-kube-api-access-787wq\") pod \"nova-operator-controller-manager-567668f5cf-sfjt8\" (UID: \"404ea967-df67-480c-bb2a-fd67aba90b6c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8" Feb 24 05:48:58.801480 master-0 kubenswrapper[34361]: I0224 05:48:58.800868 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 24 05:48:58.807962 master-0 kubenswrapper[34361]: I0224 05:48:58.807916 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w" Feb 24 05:48:58.843823 master-0 kubenswrapper[34361]: I0224 05:48:58.843740 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx" Feb 24 05:48:58.865620 master-0 kubenswrapper[34361]: I0224 05:48:58.865545 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf" Feb 24 05:48:58.892191 master-0 kubenswrapper[34361]: I0224 05:48:58.892122 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j"] Feb 24 05:48:58.903013 master-0 kubenswrapper[34361]: I0224 05:48:58.902898 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqrpl\" (UniqueName: \"kubernetes.io/projected/cf59d571-b148-4f01-823f-2d996694d934-kube-api-access-lqrpl\") pod \"neutron-operator-controller-manager-6bd4687957-svmn2\" (UID: \"cf59d571-b148-4f01-823f-2d996694d934\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2" Feb 24 05:48:58.903013 master-0 kubenswrapper[34361]: I0224 05:48:58.902972 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbwt\" (UniqueName: \"kubernetes.io/projected/30188323-a2c2-4fd3-8b5c-42ebe4e57777-kube-api-access-7zbwt\") pod \"octavia-operator-controller-manager-659dc6bbfc-z4h54\" (UID: \"30188323-a2c2-4fd3-8b5c-42ebe4e57777\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54" Feb 24 05:48:58.903134 master-0 kubenswrapper[34361]: I0224 05:48:58.903022 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-787wq\" (UniqueName: \"kubernetes.io/projected/404ea967-df67-480c-bb2a-fd67aba90b6c-kube-api-access-787wq\") pod \"nova-operator-controller-manager-567668f5cf-sfjt8\" (UID: \"404ea967-df67-480c-bb2a-fd67aba90b6c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8" Feb 24 05:48:58.903134 master-0 kubenswrapper[34361]: I0224 05:48:58.903053 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:48:58.903206 master-0 kubenswrapper[34361]: I0224 05:48:58.903172 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf2nq\" (UniqueName: \"kubernetes.io/projected/7e772b09-8f15-4d6f-be11-482fe9376b51-kube-api-access-xf2nq\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:48:58.903241 master-0 kubenswrapper[34361]: I0224 05:48:58.903214 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rchp9\" (UniqueName: \"kubernetes.io/projected/31d798ac-2e57-4ad1-a457-55a4dc84ba4f-kube-api-access-rchp9\") pod \"ovn-operator-controller-manager-5955d8c787-zbd8b\" (UID: \"31d798ac-2e57-4ad1-a457-55a4dc84ba4f\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b" Feb 24 05:48:58.940636 master-0 kubenswrapper[34361]: I0224 05:48:58.940587 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqrpl\" (UniqueName: \"kubernetes.io/projected/cf59d571-b148-4f01-823f-2d996694d934-kube-api-access-lqrpl\") pod \"neutron-operator-controller-manager-6bd4687957-svmn2\" (UID: \"cf59d571-b148-4f01-823f-2d996694d934\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2" Feb 24 05:48:58.987268 master-0 kubenswrapper[34361]: I0224 05:48:58.986346 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-787wq\" (UniqueName: 
\"kubernetes.io/projected/404ea967-df67-480c-bb2a-fd67aba90b6c-kube-api-access-787wq\") pod \"nova-operator-controller-manager-567668f5cf-sfjt8\" (UID: \"404ea967-df67-480c-bb2a-fd67aba90b6c\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8" Feb 24 05:48:59.015615 master-0 kubenswrapper[34361]: I0224 05:48:59.014192 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b"] Feb 24 05:48:59.021847 master-0 kubenswrapper[34361]: I0224 05:48:59.020283 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2" Feb 24 05:48:59.063823 master-0 kubenswrapper[34361]: I0224 05:48:59.063736 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf2nq\" (UniqueName: \"kubernetes.io/projected/7e772b09-8f15-4d6f-be11-482fe9376b51-kube-api-access-xf2nq\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:48:59.064194 master-0 kubenswrapper[34361]: I0224 05:48:59.064160 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rchp9\" (UniqueName: \"kubernetes.io/projected/31d798ac-2e57-4ad1-a457-55a4dc84ba4f-kube-api-access-rchp9\") pod \"ovn-operator-controller-manager-5955d8c787-zbd8b\" (UID: \"31d798ac-2e57-4ad1-a457-55a4dc84ba4f\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b" Feb 24 05:48:59.064335 master-0 kubenswrapper[34361]: I0224 05:48:59.064316 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: 
\"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:48:59.064648 master-0 kubenswrapper[34361]: E0224 05:48:59.064547 34361 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 05:48:59.064701 master-0 kubenswrapper[34361]: E0224 05:48:59.064693 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert podName:d635d276-775a-4aef-b331-f03468985b12 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:00.064669893 +0000 UTC m=+699.767286939 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert") pod "infra-operator-controller-manager-5f879c76b6-bv48m" (UID: "d635d276-775a-4aef-b331-f03468985b12") : secret "infra-operator-webhook-server-cert" not found Feb 24 05:48:59.065202 master-0 kubenswrapper[34361]: I0224 05:48:59.065146 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbwt\" (UniqueName: \"kubernetes.io/projected/30188323-a2c2-4fd3-8b5c-42ebe4e57777-kube-api-access-7zbwt\") pod \"octavia-operator-controller-manager-659dc6bbfc-z4h54\" (UID: \"30188323-a2c2-4fd3-8b5c-42ebe4e57777\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54" Feb 24 05:48:59.065507 master-0 kubenswrapper[34361]: I0224 05:48:59.065342 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:48:59.065789 master-0 
kubenswrapper[34361]: E0224 05:48:59.065696 34361 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:48:59.065789 master-0 kubenswrapper[34361]: E0224 05:48:59.065734 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert podName:7e772b09-8f15-4d6f-be11-482fe9376b51 nodeName:}" failed. No retries permitted until 2026-02-24 05:48:59.565726311 +0000 UTC m=+699.268343357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b92xw4j" (UID: "7e772b09-8f15-4d6f-be11-482fe9376b51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:48:59.076223 master-0 kubenswrapper[34361]: I0224 05:48:59.076173 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8" Feb 24 05:48:59.089231 master-0 kubenswrapper[34361]: I0224 05:48:59.089160 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf2nq\" (UniqueName: \"kubernetes.io/projected/7e772b09-8f15-4d6f-be11-482fe9376b51-kube-api-access-xf2nq\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:48:59.102997 master-0 kubenswrapper[34361]: I0224 05:48:59.098087 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm"] Feb 24 05:48:59.104418 master-0 kubenswrapper[34361]: I0224 05:48:59.103280 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rchp9\" (UniqueName: \"kubernetes.io/projected/31d798ac-2e57-4ad1-a457-55a4dc84ba4f-kube-api-access-rchp9\") pod \"ovn-operator-controller-manager-5955d8c787-zbd8b\" (UID: \"31d798ac-2e57-4ad1-a457-55a4dc84ba4f\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b" Feb 24 05:48:59.105611 master-0 kubenswrapper[34361]: I0224 05:48:59.105549 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm" Feb 24 05:48:59.108184 master-0 kubenswrapper[34361]: I0224 05:48:59.107306 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbwt\" (UniqueName: \"kubernetes.io/projected/30188323-a2c2-4fd3-8b5c-42ebe4e57777-kube-api-access-7zbwt\") pod \"octavia-operator-controller-manager-659dc6bbfc-z4h54\" (UID: \"30188323-a2c2-4fd3-8b5c-42ebe4e57777\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54" Feb 24 05:48:59.132967 master-0 kubenswrapper[34361]: I0224 05:48:59.132899 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b" Feb 24 05:48:59.151418 master-0 kubenswrapper[34361]: I0224 05:48:59.151363 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2"] Feb 24 05:48:59.153691 master-0 kubenswrapper[34361]: I0224 05:48:59.153395 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2" Feb 24 05:48:59.178161 master-0 kubenswrapper[34361]: I0224 05:48:59.178105 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm"] Feb 24 05:48:59.185568 master-0 kubenswrapper[34361]: I0224 05:48:59.185454 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2"] Feb 24 05:48:59.208494 master-0 kubenswrapper[34361]: I0224 05:48:59.201357 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5"] Feb 24 05:48:59.208494 master-0 kubenswrapper[34361]: I0224 05:48:59.202721 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5" Feb 24 05:48:59.208494 master-0 kubenswrapper[34361]: I0224 05:48:59.204088 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54" Feb 24 05:48:59.211075 master-0 kubenswrapper[34361]: I0224 05:48:59.209784 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5"] Feb 24 05:48:59.218530 master-0 kubenswrapper[34361]: I0224 05:48:59.218487 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4"] Feb 24 05:48:59.219813 master-0 kubenswrapper[34361]: I0224 05:48:59.219771 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" Feb 24 05:48:59.241261 master-0 kubenswrapper[34361]: I0224 05:48:59.239824 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4"] Feb 24 05:48:59.270421 master-0 kubenswrapper[34361]: I0224 05:48:59.269433 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xcjw\" (UniqueName: \"kubernetes.io/projected/3d52cd17-06ac-4536-a502-a7202bf0666d-kube-api-access-8xcjw\") pod \"placement-operator-controller-manager-8497b45c89-8xrtm\" (UID: \"3d52cd17-06ac-4536-a502-a7202bf0666d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm" Feb 24 05:48:59.270421 master-0 kubenswrapper[34361]: I0224 05:48:59.269493 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669m8\" (UniqueName: \"kubernetes.io/projected/99742dbc-b8c6-4994-b3b0-dbaa54be9d86-kube-api-access-669m8\") 
pod \"swift-operator-controller-manager-68f46476f-tc9k2\" (UID: \"99742dbc-b8c6-4994-b3b0-dbaa54be9d86\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2" Feb 24 05:48:59.270421 master-0 kubenswrapper[34361]: I0224 05:48:59.269614 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2"] Feb 24 05:48:59.275563 master-0 kubenswrapper[34361]: I0224 05:48:59.275507 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" Feb 24 05:48:59.314796 master-0 kubenswrapper[34361]: I0224 05:48:59.313894 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2"] Feb 24 05:48:59.352455 master-0 kubenswrapper[34361]: I0224 05:48:59.352395 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"] Feb 24 05:48:59.354353 master-0 kubenswrapper[34361]: I0224 05:48:59.353891 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:48:59.356796 master-0 kubenswrapper[34361]: I0224 05:48:59.356755 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 24 05:48:59.356904 master-0 kubenswrapper[34361]: I0224 05:48:59.356785 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 24 05:48:59.365397 master-0 kubenswrapper[34361]: I0224 05:48:59.365349 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"] Feb 24 05:48:59.372972 master-0 kubenswrapper[34361]: I0224 05:48:59.372242 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xcjw\" (UniqueName: \"kubernetes.io/projected/3d52cd17-06ac-4536-a502-a7202bf0666d-kube-api-access-8xcjw\") pod \"placement-operator-controller-manager-8497b45c89-8xrtm\" (UID: \"3d52cd17-06ac-4536-a502-a7202bf0666d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm" Feb 24 05:48:59.372972 master-0 kubenswrapper[34361]: I0224 05:48:59.372328 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-669m8\" (UniqueName: \"kubernetes.io/projected/99742dbc-b8c6-4994-b3b0-dbaa54be9d86-kube-api-access-669m8\") pod \"swift-operator-controller-manager-68f46476f-tc9k2\" (UID: \"99742dbc-b8c6-4994-b3b0-dbaa54be9d86\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2" Feb 24 05:48:59.372972 master-0 kubenswrapper[34361]: I0224 05:48:59.372413 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xhqv\" (UniqueName: \"kubernetes.io/projected/19710adc-933a-4331-b46a-c836b775f6c7-kube-api-access-4xhqv\") pod 
\"telemetry-operator-controller-manager-589c568786-9ljm5\" (UID: \"19710adc-933a-4331-b46a-c836b775f6c7\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5" Feb 24 05:48:59.372972 master-0 kubenswrapper[34361]: I0224 05:48:59.372497 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztqj\" (UniqueName: \"kubernetes.io/projected/c44e156d-428e-4d11-ae59-37c01f013e24-kube-api-access-5ztqj\") pod \"test-operator-controller-manager-5dc6794d5b-96zg4\" (UID: \"c44e156d-428e-4d11-ae59-37c01f013e24\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" Feb 24 05:48:59.372972 master-0 kubenswrapper[34361]: I0224 05:48:59.372558 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvg6\" (UniqueName: \"kubernetes.io/projected/59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa-kube-api-access-dzvg6\") pod \"watcher-operator-controller-manager-bccc79885-96xg2\" (UID: \"59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" Feb 24 05:48:59.397750 master-0 kubenswrapper[34361]: I0224 05:48:59.395976 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh"] Feb 24 05:48:59.397750 master-0 kubenswrapper[34361]: I0224 05:48:59.396007 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xcjw\" (UniqueName: \"kubernetes.io/projected/3d52cd17-06ac-4536-a502-a7202bf0666d-kube-api-access-8xcjw\") pod \"placement-operator-controller-manager-8497b45c89-8xrtm\" (UID: \"3d52cd17-06ac-4536-a502-a7202bf0666d\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm" Feb 24 05:48:59.397975 master-0 kubenswrapper[34361]: I0224 05:48:59.397845 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh" Feb 24 05:48:59.399075 master-0 kubenswrapper[34361]: W0224 05:48:59.399014 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod179ae4f1_42de_4005_b33c_fd32bddbc2ba.slice/crio-e49e51900ecf4199bf29a858ee6444a764d834e760bc63493e271328d95aef6d WatchSource:0}: Error finding container e49e51900ecf4199bf29a858ee6444a764d834e760bc63493e271328d95aef6d: Status 404 returned error can't find the container with id e49e51900ecf4199bf29a858ee6444a764d834e760bc63493e271328d95aef6d Feb 24 05:48:59.400343 master-0 kubenswrapper[34361]: I0224 05:48:59.400258 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-669m8\" (UniqueName: \"kubernetes.io/projected/99742dbc-b8c6-4994-b3b0-dbaa54be9d86-kube-api-access-669m8\") pod \"swift-operator-controller-manager-68f46476f-tc9k2\" (UID: \"99742dbc-b8c6-4994-b3b0-dbaa54be9d86\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2" Feb 24 05:48:59.406100 master-0 kubenswrapper[34361]: I0224 05:48:59.406072 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh"] Feb 24 05:48:59.454010 master-0 kubenswrapper[34361]: I0224 05:48:59.453547 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm" Feb 24 05:48:59.477465 master-0 kubenswrapper[34361]: I0224 05:48:59.476019 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:48:59.477465 master-0 kubenswrapper[34361]: I0224 05:48:59.476089 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xhqv\" (UniqueName: \"kubernetes.io/projected/19710adc-933a-4331-b46a-c836b775f6c7-kube-api-access-4xhqv\") pod \"telemetry-operator-controller-manager-589c568786-9ljm5\" (UID: \"19710adc-933a-4331-b46a-c836b775f6c7\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5" Feb 24 05:48:59.477465 master-0 kubenswrapper[34361]: I0224 05:48:59.476161 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ztqj\" (UniqueName: \"kubernetes.io/projected/c44e156d-428e-4d11-ae59-37c01f013e24-kube-api-access-5ztqj\") pod \"test-operator-controller-manager-5dc6794d5b-96zg4\" (UID: \"c44e156d-428e-4d11-ae59-37c01f013e24\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" Feb 24 05:48:59.477465 master-0 kubenswrapper[34361]: I0224 05:48:59.476217 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvg6\" (UniqueName: \"kubernetes.io/projected/59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa-kube-api-access-dzvg6\") pod \"watcher-operator-controller-manager-bccc79885-96xg2\" (UID: \"59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa\") " 
pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" Feb 24 05:48:59.477465 master-0 kubenswrapper[34361]: I0224 05:48:59.476255 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg5lj\" (UniqueName: \"kubernetes.io/projected/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-kube-api-access-zg5lj\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:48:59.483988 master-0 kubenswrapper[34361]: I0224 05:48:59.483602 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng"] Feb 24 05:48:59.498622 master-0 kubenswrapper[34361]: I0224 05:48:59.496759 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:48:59.505577 master-0 kubenswrapper[34361]: I0224 05:48:59.504168 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xhqv\" (UniqueName: \"kubernetes.io/projected/19710adc-933a-4331-b46a-c836b775f6c7-kube-api-access-4xhqv\") pod \"telemetry-operator-controller-manager-589c568786-9ljm5\" (UID: \"19710adc-933a-4331-b46a-c836b775f6c7\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5" Feb 24 05:48:59.506217 master-0 kubenswrapper[34361]: I0224 05:48:59.506175 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvg6\" (UniqueName: 
\"kubernetes.io/projected/59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa-kube-api-access-dzvg6\") pod \"watcher-operator-controller-manager-bccc79885-96xg2\" (UID: \"59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" Feb 24 05:48:59.508879 master-0 kubenswrapper[34361]: I0224 05:48:59.508843 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ztqj\" (UniqueName: \"kubernetes.io/projected/c44e156d-428e-4d11-ae59-37c01f013e24-kube-api-access-5ztqj\") pod \"test-operator-controller-manager-5dc6794d5b-96zg4\" (UID: \"c44e156d-428e-4d11-ae59-37c01f013e24\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" Feb 24 05:48:59.517202 master-0 kubenswrapper[34361]: I0224 05:48:59.509541 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" Feb 24 05:48:59.578561 master-0 kubenswrapper[34361]: I0224 05:48:59.578528 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j"] Feb 24 05:48:59.597870 master-0 kubenswrapper[34361]: W0224 05:48:59.597798 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda976689d_dcf3_4a33_9530_dce5ff43bddd.slice/crio-30203a0c3f69005160e01978ae9891057c710ced930fc61865c620cadd80e43b WatchSource:0}: Error finding container 30203a0c3f69005160e01978ae9891057c710ced930fc61865c620cadd80e43b: Status 404 returned error can't find the container with id 30203a0c3f69005160e01978ae9891057c710ced930fc61865c620cadd80e43b Feb 24 05:48:59.598086 master-0 kubenswrapper[34361]: I0224 05:48:59.597940 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2" Feb 24 05:48:59.601870 master-0 kubenswrapper[34361]: I0224 05:48:59.601839 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:48:59.601956 master-0 kubenswrapper[34361]: I0224 05:48:59.601923 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx788\" (UniqueName: \"kubernetes.io/projected/773875c3-3433-4c6f-bbc9-dc7c35a0eb4b-kube-api-access-mx788\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vrbmh\" (UID: \"773875c3-3433-4c6f-bbc9-dc7c35a0eb4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh" Feb 24 05:48:59.602006 master-0 kubenswrapper[34361]: I0224 05:48:59.601966 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:48:59.602006 master-0 kubenswrapper[34361]: I0224 05:48:59.601997 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg5lj\" (UniqueName: \"kubernetes.io/projected/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-kube-api-access-zg5lj\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 
05:48:59.602074 master-0 kubenswrapper[34361]: I0224 05:48:59.602022 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:48:59.602247 master-0 kubenswrapper[34361]: E0224 05:48:59.602222 34361 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 05:48:59.602372 master-0 kubenswrapper[34361]: E0224 05:48:59.602272 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:00.102256925 +0000 UTC m=+699.804873961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "webhook-server-cert" not found Feb 24 05:48:59.602372 master-0 kubenswrapper[34361]: E0224 05:48:59.602324 34361 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 05:48:59.602372 master-0 kubenswrapper[34361]: E0224 05:48:59.602346 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:00.102340337 +0000 UTC m=+699.804957383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "metrics-server-cert" not found Feb 24 05:48:59.602487 master-0 kubenswrapper[34361]: E0224 05:48:59.602391 34361 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:48:59.602487 master-0 kubenswrapper[34361]: E0224 05:48:59.602409 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert podName:7e772b09-8f15-4d6f-be11-482fe9376b51 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:00.602403039 +0000 UTC m=+700.305020085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b92xw4j" (UID: "7e772b09-8f15-4d6f-be11-482fe9376b51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:48:59.616162 master-0 kubenswrapper[34361]: W0224 05:48:59.613546 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2351f6f1_b876_4809_85ea_79fcdb287059.slice/crio-2f79b643da7d04b712a463068952eb20a5f8ffc240701b41bd745d6fecbf86b1 WatchSource:0}: Error finding container 2f79b643da7d04b712a463068952eb20a5f8ffc240701b41bd745d6fecbf86b1: Status 404 returned error can't find the container with id 2f79b643da7d04b712a463068952eb20a5f8ffc240701b41bd745d6fecbf86b1 Feb 24 05:48:59.629992 master-0 kubenswrapper[34361]: I0224 05:48:59.622985 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg5lj\" (UniqueName: 
\"kubernetes.io/projected/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-kube-api-access-zg5lj\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:48:59.629992 master-0 kubenswrapper[34361]: I0224 05:48:59.624133 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn"] Feb 24 05:48:59.687455 master-0 kubenswrapper[34361]: I0224 05:48:59.673750 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv"] Feb 24 05:48:59.708236 master-0 kubenswrapper[34361]: I0224 05:48:59.707261 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx788\" (UniqueName: \"kubernetes.io/projected/773875c3-3433-4c6f-bbc9-dc7c35a0eb4b-kube-api-access-mx788\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vrbmh\" (UID: \"773875c3-3433-4c6f-bbc9-dc7c35a0eb4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh" Feb 24 05:48:59.726438 master-0 kubenswrapper[34361]: I0224 05:48:59.726271 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-75df9"] Feb 24 05:48:59.731946 master-0 kubenswrapper[34361]: I0224 05:48:59.731904 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx788\" (UniqueName: \"kubernetes.io/projected/773875c3-3433-4c6f-bbc9-dc7c35a0eb4b-kube-api-access-mx788\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vrbmh\" (UID: \"773875c3-3433-4c6f-bbc9-dc7c35a0eb4b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh" Feb 24 05:48:59.763046 master-0 kubenswrapper[34361]: I0224 05:48:59.762479 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5" Feb 24 05:48:59.802975 master-0 kubenswrapper[34361]: I0224 05:48:59.802909 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" Feb 24 05:48:59.836921 master-0 kubenswrapper[34361]: W0224 05:48:59.836537 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec02b59d_8c1d_4117_a043_2f2536f665e4.slice/crio-e7907c91fd3fa5deea9fc172b38a0e875005ef4bb5926da23611235d6bbf45ad WatchSource:0}: Error finding container e7907c91fd3fa5deea9fc172b38a0e875005ef4bb5926da23611235d6bbf45ad: Status 404 returned error can't find the container with id e7907c91fd3fa5deea9fc172b38a0e875005ef4bb5926da23611235d6bbf45ad Feb 24 05:48:59.856696 master-0 kubenswrapper[34361]: I0224 05:48:59.853679 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9" event={"ID":"2351f6f1-b876-4809-85ea-79fcdb287059","Type":"ContainerStarted","Data":"2f79b643da7d04b712a463068952eb20a5f8ffc240701b41bd745d6fecbf86b1"} Feb 24 05:48:59.856696 master-0 kubenswrapper[34361]: I0224 05:48:59.854611 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh" Feb 24 05:48:59.858384 master-0 kubenswrapper[34361]: I0224 05:48:59.858288 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt"] Feb 24 05:48:59.866788 master-0 kubenswrapper[34361]: I0224 05:48:59.866553 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv" event={"ID":"a976689d-dcf3-4a33-9530-dce5ff43bddd","Type":"ContainerStarted","Data":"30203a0c3f69005160e01978ae9891057c710ced930fc61865c620cadd80e43b"} Feb 24 05:48:59.878772 master-0 kubenswrapper[34361]: I0224 05:48:59.878691 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng" event={"ID":"4f457c03-e121-401c-b724-4dd147b7ff3b","Type":"ContainerStarted","Data":"f7bfa4285e38dff1683cb0fec15cc6741deb8efbf6c4b8a1661caec38137a806"} Feb 24 05:48:59.882893 master-0 kubenswrapper[34361]: I0224 05:48:59.882587 34361 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 05:48:59.884412 master-0 kubenswrapper[34361]: I0224 05:48:59.884337 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j" event={"ID":"8d9c9b31-7abc-4f3c-b0ee-419a96c0f4aa","Type":"ContainerStarted","Data":"419fee5f8559998c9284f84ddcd69a56df08e6718accc274bae1b40ff895ff27"} Feb 24 05:48:59.897470 master-0 kubenswrapper[34361]: I0224 05:48:59.897417 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn" event={"ID":"179ae4f1-42de-4005-b33c-fd32bddbc2ba","Type":"ContainerStarted","Data":"e49e51900ecf4199bf29a858ee6444a764d834e760bc63493e271328d95aef6d"} Feb 24 05:49:00.127199 master-0 kubenswrapper[34361]: 
I0224 05:49:00.126935 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:00.127199 master-0 kubenswrapper[34361]: I0224 05:49:00.127074 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:00.127199 master-0 kubenswrapper[34361]: I0224 05:49:00.127100 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:49:00.127199 master-0 kubenswrapper[34361]: E0224 05:49:00.127150 34361 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 05:49:00.127839 master-0 kubenswrapper[34361]: E0224 05:49:00.127252 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:01.127225296 +0000 UTC m=+700.829842342 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "webhook-server-cert" not found Feb 24 05:49:00.127839 master-0 kubenswrapper[34361]: E0224 05:49:00.127341 34361 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 05:49:00.127839 master-0 kubenswrapper[34361]: E0224 05:49:00.127448 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:01.127423781 +0000 UTC m=+700.830040827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "metrics-server-cert" not found Feb 24 05:49:00.127839 master-0 kubenswrapper[34361]: E0224 05:49:00.127592 34361 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 05:49:00.127839 master-0 kubenswrapper[34361]: E0224 05:49:00.127619 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert podName:d635d276-775a-4aef-b331-f03468985b12 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:02.127611856 +0000 UTC m=+701.830228902 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert") pod "infra-operator-controller-manager-5f879c76b6-bv48m" (UID: "d635d276-775a-4aef-b331-f03468985b12") : secret "infra-operator-webhook-server-cert" not found Feb 24 05:49:00.241575 master-0 kubenswrapper[34361]: I0224 05:49:00.238908 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf"] Feb 24 05:49:00.250706 master-0 kubenswrapper[34361]: I0224 05:49:00.249760 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j"] Feb 24 05:49:00.261415 master-0 kubenswrapper[34361]: I0224 05:49:00.257755 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w"] Feb 24 05:49:00.639892 master-0 kubenswrapper[34361]: I0224 05:49:00.639802 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:49:00.640216 master-0 kubenswrapper[34361]: E0224 05:49:00.640109 34361 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:49:00.640255 master-0 kubenswrapper[34361]: E0224 05:49:00.640234 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert podName:7e772b09-8f15-4d6f-be11-482fe9376b51 nodeName:}" failed. 
No retries permitted until 2026-02-24 05:49:02.640206804 +0000 UTC m=+702.342823850 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b92xw4j" (UID: "7e772b09-8f15-4d6f-be11-482fe9376b51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:49:00.813543 master-0 kubenswrapper[34361]: W0224 05:49:00.811564 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30188323_a2c2_4fd3_8b5c_42ebe4e57777.slice/crio-38da2db58953bf67b2429cbce81a7607c4080c13547baba1e3381cd82661a1e3 WatchSource:0}: Error finding container 38da2db58953bf67b2429cbce81a7607c4080c13547baba1e3381cd82661a1e3: Status 404 returned error can't find the container with id 38da2db58953bf67b2429cbce81a7607c4080c13547baba1e3381cd82661a1e3 Feb 24 05:49:00.815651 master-0 kubenswrapper[34361]: I0224 05:49:00.815547 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b"] Feb 24 05:49:00.838156 master-0 kubenswrapper[34361]: W0224 05:49:00.838064 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf59d571_b148_4f01_823f_2d996694d934.slice/crio-898bcbbaac4df2c7712859d6a156291335722c831fb13144caa8ff34a60560d2 WatchSource:0}: Error finding container 898bcbbaac4df2c7712859d6a156291335722c831fb13144caa8ff34a60560d2: Status 404 returned error can't find the container with id 898bcbbaac4df2c7712859d6a156291335722c831fb13144caa8ff34a60560d2 Feb 24 05:49:00.841598 master-0 kubenswrapper[34361]: W0224 05:49:00.841538 34361 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404ea967_df67_480c_bb2a_fd67aba90b6c.slice/crio-ddaf10cb48e55e269c66cc66fa5faddc59abc1fcdf801afbbdd51f50bfa45848 WatchSource:0}: Error finding container ddaf10cb48e55e269c66cc66fa5faddc59abc1fcdf801afbbdd51f50bfa45848: Status 404 returned error can't find the container with id ddaf10cb48e55e269c66cc66fa5faddc59abc1fcdf801afbbdd51f50bfa45848 Feb 24 05:49:00.856383 master-0 kubenswrapper[34361]: I0224 05:49:00.855612 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-qbghx"] Feb 24 05:49:00.870101 master-0 kubenswrapper[34361]: I0224 05:49:00.870047 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54"] Feb 24 05:49:00.889530 master-0 kubenswrapper[34361]: I0224 05:49:00.889395 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2"] Feb 24 05:49:00.907190 master-0 kubenswrapper[34361]: I0224 05:49:00.907139 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8"] Feb 24 05:49:00.924148 master-0 kubenswrapper[34361]: I0224 05:49:00.924043 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8" event={"ID":"404ea967-df67-480c-bb2a-fd67aba90b6c","Type":"ContainerStarted","Data":"ddaf10cb48e55e269c66cc66fa5faddc59abc1fcdf801afbbdd51f50bfa45848"} Feb 24 05:49:00.926170 master-0 kubenswrapper[34361]: I0224 05:49:00.926142 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt" event={"ID":"ec02b59d-8c1d-4117-a043-2f2536f665e4","Type":"ContainerStarted","Data":"e7907c91fd3fa5deea9fc172b38a0e875005ef4bb5926da23611235d6bbf45ad"} Feb 24 
05:49:00.929002 master-0 kubenswrapper[34361]: I0224 05:49:00.928928 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2" event={"ID":"cf59d571-b148-4f01-823f-2d996694d934","Type":"ContainerStarted","Data":"898bcbbaac4df2c7712859d6a156291335722c831fb13144caa8ff34a60560d2"} Feb 24 05:49:00.938124 master-0 kubenswrapper[34361]: I0224 05:49:00.937608 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b" event={"ID":"31d798ac-2e57-4ad1-a457-55a4dc84ba4f","Type":"ContainerStarted","Data":"c7427b39a2d03f7f180ad3005e66204ae0a9075c2b8b3063d42f054a82002fe1"} Feb 24 05:49:00.941899 master-0 kubenswrapper[34361]: I0224 05:49:00.941782 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx" event={"ID":"de466d9b-824c-49c7-946e-be9936d48d41","Type":"ContainerStarted","Data":"5a590ef061c36fbe06d3ce14845787739aff993059f518f80b4e846539296b88"} Feb 24 05:49:00.944971 master-0 kubenswrapper[34361]: I0224 05:49:00.944796 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w" event={"ID":"b4a2f137-f0c7-481c-a308-51f97186744d","Type":"ContainerStarted","Data":"25aeeba05b7b45501e915c429cb53e766e0b0789e5045b9bfe8306f057b95b05"} Feb 24 05:49:00.946303 master-0 kubenswrapper[34361]: I0224 05:49:00.946248 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j" event={"ID":"2573e160-9095-4011-a842-94316fa317b8","Type":"ContainerStarted","Data":"cdd697721f1d759da80474a43fce97804cd0919bf2bc4cef2ac572dbd72b1e1a"} Feb 24 05:49:00.951609 master-0 kubenswrapper[34361]: I0224 05:49:00.951499 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf" event={"ID":"4a1b71ea-8ffd-47f7-8c21-816211026592","Type":"ContainerStarted","Data":"0002e6000be15052b25a100df3bfa72220c654f85b8c9cf2d1687060b635649c"} Feb 24 05:49:00.959888 master-0 kubenswrapper[34361]: I0224 05:49:00.959820 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54" event={"ID":"30188323-a2c2-4fd3-8b5c-42ebe4e57777","Type":"ContainerStarted","Data":"38da2db58953bf67b2429cbce81a7607c4080c13547baba1e3381cd82661a1e3"} Feb 24 05:49:01.051286 master-0 kubenswrapper[34361]: W0224 05:49:01.051111 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99742dbc_b8c6_4994_b3b0_dbaa54be9d86.slice/crio-9512f45d2f707dca97492d6d20cfb53e2d56490ccddcf4c5814c2bae8d1585c8 WatchSource:0}: Error finding container 9512f45d2f707dca97492d6d20cfb53e2d56490ccddcf4c5814c2bae8d1585c8: Status 404 returned error can't find the container with id 9512f45d2f707dca97492d6d20cfb53e2d56490ccddcf4c5814c2bae8d1585c8 Feb 24 05:49:01.058920 master-0 kubenswrapper[34361]: W0224 05:49:01.058864 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59d94335_c0b3_4bf5_b0a6_b3e1f618f2aa.slice/crio-646743629a4385ad73b146949ad59c545151425a88fdd56aa49033842d4693c3 WatchSource:0}: Error finding container 646743629a4385ad73b146949ad59c545151425a88fdd56aa49033842d4693c3: Status 404 returned error can't find the container with id 646743629a4385ad73b146949ad59c545151425a88fdd56aa49033842d4693c3 Feb 24 05:49:01.077399 master-0 kubenswrapper[34361]: I0224 05:49:01.077031 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm"] Feb 24 05:49:01.132152 master-0 kubenswrapper[34361]: I0224 05:49:01.131192 34361 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5"] Feb 24 05:49:01.157613 master-0 kubenswrapper[34361]: I0224 05:49:01.157543 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:01.157946 master-0 kubenswrapper[34361]: I0224 05:49:01.157674 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:01.157946 master-0 kubenswrapper[34361]: E0224 05:49:01.157883 34361 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 05:49:01.157946 master-0 kubenswrapper[34361]: E0224 05:49:01.157940 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:03.15792395 +0000 UTC m=+702.860540986 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "metrics-server-cert" not found Feb 24 05:49:01.158444 master-0 kubenswrapper[34361]: E0224 05:49:01.158423 34361 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 05:49:01.158522 master-0 kubenswrapper[34361]: E0224 05:49:01.158457 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:03.158448074 +0000 UTC m=+702.861065120 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "webhook-server-cert" not found Feb 24 05:49:01.233521 master-0 kubenswrapper[34361]: I0224 05:49:01.233444 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2"] Feb 24 05:49:01.254288 master-0 kubenswrapper[34361]: I0224 05:49:01.254224 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2"] Feb 24 05:49:01.270724 master-0 kubenswrapper[34361]: I0224 05:49:01.270657 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh"] Feb 24 05:49:01.300125 master-0 kubenswrapper[34361]: W0224 05:49:01.299930 34361 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44e156d_428e_4d11_ae59_37c01f013e24.slice/crio-d388aaa720e578182ab35ff50b126116e8c3d9cd553b90df0c0741ef03ca0641 WatchSource:0}: Error finding container d388aaa720e578182ab35ff50b126116e8c3d9cd553b90df0c0741ef03ca0641: Status 404 returned error can't find the container with id d388aaa720e578182ab35ff50b126116e8c3d9cd553b90df0c0741ef03ca0641 Feb 24 05:49:01.304996 master-0 kubenswrapper[34361]: E0224 05:49:01.304894 34361 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5ztqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-96zg4_openstack-operators(c44e156d-428e-4d11-ae59-37c01f013e24): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 24 05:49:01.306908 master-0 kubenswrapper[34361]: E0224 05:49:01.306748 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" podUID="c44e156d-428e-4d11-ae59-37c01f013e24" Feb 24 05:49:01.336910 master-0 kubenswrapper[34361]: I0224 05:49:01.336834 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4"] Feb 24 05:49:02.010565 master-0 kubenswrapper[34361]: I0224 05:49:02.010459 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5" event={"ID":"19710adc-933a-4331-b46a-c836b775f6c7","Type":"ContainerStarted","Data":"dab3040b6f41597e1aabd1dc1d01f9e39788822bd6f3ade558c0ffa3b7e3f477"} Feb 24 05:49:02.015596 master-0 kubenswrapper[34361]: I0224 05:49:02.015557 34361 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2" event={"ID":"99742dbc-b8c6-4994-b3b0-dbaa54be9d86","Type":"ContainerStarted","Data":"9512f45d2f707dca97492d6d20cfb53e2d56490ccddcf4c5814c2bae8d1585c8"} Feb 24 05:49:02.024847 master-0 kubenswrapper[34361]: I0224 05:49:02.024805 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm" event={"ID":"3d52cd17-06ac-4536-a502-a7202bf0666d","Type":"ContainerStarted","Data":"2ab12d8dc3d83bfcf20ac07c19ff4517eaaf3f30e560efeca2e1ab1f91898134"} Feb 24 05:49:02.033062 master-0 kubenswrapper[34361]: I0224 05:49:02.032555 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh" event={"ID":"773875c3-3433-4c6f-bbc9-dc7c35a0eb4b","Type":"ContainerStarted","Data":"bfa2f7825c7607aad72a94c0934ee1f13c4a598b733c1a435c26de32e1564ae9"} Feb 24 05:49:02.034979 master-0 kubenswrapper[34361]: I0224 05:49:02.034946 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" event={"ID":"59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa","Type":"ContainerStarted","Data":"646743629a4385ad73b146949ad59c545151425a88fdd56aa49033842d4693c3"} Feb 24 05:49:02.038262 master-0 kubenswrapper[34361]: I0224 05:49:02.038202 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" event={"ID":"c44e156d-428e-4d11-ae59-37c01f013e24","Type":"ContainerStarted","Data":"d388aaa720e578182ab35ff50b126116e8c3d9cd553b90df0c0741ef03ca0641"} Feb 24 05:49:02.041764 master-0 kubenswrapper[34361]: E0224 05:49:02.041728 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" podUID="c44e156d-428e-4d11-ae59-37c01f013e24" Feb 24 05:49:02.205511 master-0 kubenswrapper[34361]: I0224 05:49:02.205022 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:49:02.205511 master-0 kubenswrapper[34361]: E0224 05:49:02.205409 34361 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 05:49:02.205951 master-0 kubenswrapper[34361]: E0224 05:49:02.205535 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert podName:d635d276-775a-4aef-b331-f03468985b12 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:06.205509069 +0000 UTC m=+705.908126115 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert") pod "infra-operator-controller-manager-5f879c76b6-bv48m" (UID: "d635d276-775a-4aef-b331-f03468985b12") : secret "infra-operator-webhook-server-cert" not found Feb 24 05:49:02.726857 master-0 kubenswrapper[34361]: I0224 05:49:02.726782 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:49:02.727161 master-0 kubenswrapper[34361]: E0224 05:49:02.727013 34361 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:49:02.727161 master-0 kubenswrapper[34361]: E0224 05:49:02.727103 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert podName:7e772b09-8f15-4d6f-be11-482fe9376b51 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:06.727082798 +0000 UTC m=+706.429699844 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b92xw4j" (UID: "7e772b09-8f15-4d6f-be11-482fe9376b51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:49:03.056515 master-0 kubenswrapper[34361]: E0224 05:49:03.056293 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" podUID="c44e156d-428e-4d11-ae59-37c01f013e24" Feb 24 05:49:03.239924 master-0 kubenswrapper[34361]: I0224 05:49:03.239835 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:03.240635 master-0 kubenswrapper[34361]: I0224 05:49:03.239980 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:03.240635 master-0 kubenswrapper[34361]: E0224 05:49:03.240108 34361 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 05:49:03.240635 master-0 kubenswrapper[34361]: E0224 05:49:03.240109 34361 
secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 05:49:03.240635 master-0 kubenswrapper[34361]: E0224 05:49:03.240193 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:07.24017114 +0000 UTC m=+706.942788186 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "metrics-server-cert" not found Feb 24 05:49:03.240635 master-0 kubenswrapper[34361]: E0224 05:49:03.240277 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:07.240242082 +0000 UTC m=+706.942859138 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "webhook-server-cert" not found Feb 24 05:49:06.221866 master-0 kubenswrapper[34361]: I0224 05:49:06.221791 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:49:06.222520 master-0 kubenswrapper[34361]: E0224 05:49:06.222116 34361 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 24 05:49:06.222520 master-0 kubenswrapper[34361]: E0224 05:49:06.222364 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert podName:d635d276-775a-4aef-b331-f03468985b12 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:14.222287766 +0000 UTC m=+713.924904942 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert") pod "infra-operator-controller-manager-5f879c76b6-bv48m" (UID: "d635d276-775a-4aef-b331-f03468985b12") : secret "infra-operator-webhook-server-cert" not found Feb 24 05:49:06.735672 master-0 kubenswrapper[34361]: I0224 05:49:06.735588 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:49:06.735969 master-0 kubenswrapper[34361]: E0224 05:49:06.735879 34361 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:49:06.736195 master-0 kubenswrapper[34361]: E0224 05:49:06.736140 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert podName:7e772b09-8f15-4d6f-be11-482fe9376b51 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:14.736098336 +0000 UTC m=+714.438715422 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert") pod "openstack-baremetal-operator-controller-manager-579b7786b92xw4j" (UID: "7e772b09-8f15-4d6f-be11-482fe9376b51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 24 05:49:07.248486 master-0 kubenswrapper[34361]: I0224 05:49:07.248387 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:07.249829 master-0 kubenswrapper[34361]: I0224 05:49:07.248531 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:07.249829 master-0 kubenswrapper[34361]: E0224 05:49:07.248668 34361 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 24 05:49:07.249829 master-0 kubenswrapper[34361]: E0224 05:49:07.248728 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:15.248712575 +0000 UTC m=+714.951329621 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "webhook-server-cert" not found Feb 24 05:49:07.249829 master-0 kubenswrapper[34361]: E0224 05:49:07.249186 34361 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 24 05:49:07.249829 master-0 kubenswrapper[34361]: E0224 05:49:07.249216 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:15.249208509 +0000 UTC m=+714.951825555 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "metrics-server-cert" not found Feb 24 05:49:14.319021 master-0 kubenswrapper[34361]: I0224 05:49:14.318946 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:49:14.324678 master-0 kubenswrapper[34361]: I0224 05:49:14.324612 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d635d276-775a-4aef-b331-f03468985b12-cert\") pod \"infra-operator-controller-manager-5f879c76b6-bv48m\" (UID: \"d635d276-775a-4aef-b331-f03468985b12\") " 
pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m"
Feb 24 05:49:14.354598 master-0 kubenswrapper[34361]: I0224 05:49:14.354511 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m"
Feb 24 05:49:14.831453 master-0 kubenswrapper[34361]: I0224 05:49:14.831370 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j"
Feb 24 05:49:14.836662 master-0 kubenswrapper[34361]: I0224 05:49:14.836604 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7e772b09-8f15-4d6f-be11-482fe9376b51-cert\") pod \"openstack-baremetal-operator-controller-manager-579b7786b92xw4j\" (UID: \"7e772b09-8f15-4d6f-be11-482fe9376b51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j"
Feb 24 05:49:15.112544 master-0 kubenswrapper[34361]: I0224 05:49:15.112393 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j"
Feb 24 05:49:15.346579 master-0 kubenswrapper[34361]: I0224 05:49:15.346516 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"
Feb 24 05:49:15.347383 master-0 kubenswrapper[34361]: E0224 05:49:15.346821 34361 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 24 05:49:15.347503 master-0 kubenswrapper[34361]: E0224 05:49:15.347480 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:31.347445617 +0000 UTC m=+731.050062693 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "metrics-server-cert" not found
Feb 24 05:49:15.347873 master-0 kubenswrapper[34361]: E0224 05:49:15.347807 34361 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 24 05:49:15.347873 master-0 kubenswrapper[34361]: E0224 05:49:15.347859 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs podName:10243cb4-f4ad-40a0-af84-04d9dc7c32c9 nodeName:}" failed. No retries permitted until 2026-02-24 05:49:31.347847878 +0000 UTC m=+731.050464924 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs") pod "openstack-operator-controller-manager-5dc486cffc-rbqzr" (UID: "10243cb4-f4ad-40a0-af84-04d9dc7c32c9") : secret "webhook-server-cert" not found
Feb 24 05:49:15.347983 master-0 kubenswrapper[34361]: I0224 05:49:15.347677 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"
Feb 24 05:49:20.830860 master-0 kubenswrapper[34361]: I0224 05:49:20.825030 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m"]
Feb 24 05:49:21.038751 master-0 kubenswrapper[34361]: I0224 05:49:21.035261 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j"]
Feb 24 05:49:21.087525 master-0 kubenswrapper[34361]: W0224 05:49:21.086456 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e772b09_8f15_4d6f_be11_482fe9376b51.slice/crio-b319517d59185f62c682374157cabd3262fe8ccb592b4e06eb07efce3f9f77ee WatchSource:0}: Error finding container b319517d59185f62c682374157cabd3262fe8ccb592b4e06eb07efce3f9f77ee: Status 404 returned error can't find the container with id b319517d59185f62c682374157cabd3262fe8ccb592b4e06eb07efce3f9f77ee
Feb 24 05:49:21.371151 master-0 kubenswrapper[34361]: I0224 05:49:21.367230 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" event={"ID":"7e772b09-8f15-4d6f-be11-482fe9376b51","Type":"ContainerStarted","Data":"b319517d59185f62c682374157cabd3262fe8ccb592b4e06eb07efce3f9f77ee"}
Feb 24 05:49:21.371151 master-0 kubenswrapper[34361]: I0224 05:49:21.369759 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2" event={"ID":"99742dbc-b8c6-4994-b3b0-dbaa54be9d86","Type":"ContainerStarted","Data":"9eb651d5b517ffded7389eea91b84b77f10fa53337da6ec9ae3d66c0229c7832"}
Feb 24 05:49:21.371151 master-0 kubenswrapper[34361]: I0224 05:49:21.370843 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2"
Feb 24 05:49:21.373170 master-0 kubenswrapper[34361]: I0224 05:49:21.373134 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j" event={"ID":"8d9c9b31-7abc-4f3c-b0ee-419a96c0f4aa","Type":"ContainerStarted","Data":"dd00ad826b4e56a771e7a28e540041dbcaba5a1fa23003a472ecd238c2ff8c88"}
Feb 24 05:49:21.373806 master-0 kubenswrapper[34361]: I0224 05:49:21.373724 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j"
Feb 24 05:49:21.375909 master-0 kubenswrapper[34361]: I0224 05:49:21.375881 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv" event={"ID":"a976689d-dcf3-4a33-9530-dce5ff43bddd","Type":"ContainerStarted","Data":"edb27756b27b967f532e37df4dc3a92dbf77eaad4347e6cca02d4ae96aca791f"}
Feb 24 05:49:21.376485 master-0 kubenswrapper[34361]: I0224 05:49:21.376460 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv"
Feb 24 05:49:21.392410 master-0 kubenswrapper[34361]: I0224 05:49:21.390785 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2" event={"ID":"cf59d571-b148-4f01-823f-2d996694d934","Type":"ContainerStarted","Data":"402ff91839de61a2037e8455ee672f1a10b1fd28fc06298745058f98a9594821"}
Feb 24 05:49:21.392410 master-0 kubenswrapper[34361]: I0224 05:49:21.391771 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2"
Feb 24 05:49:21.419220 master-0 kubenswrapper[34361]: I0224 05:49:21.419144 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn" event={"ID":"179ae4f1-42de-4005-b33c-fd32bddbc2ba","Type":"ContainerStarted","Data":"c4cd7efa8652c7192b887de74bf005b0f42626f8866a226665fc792c0272a0ca"}
Feb 24 05:49:21.419549 master-0 kubenswrapper[34361]: I0224 05:49:21.419372 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2" podStartSLOduration=5.286771075 podStartE2EDuration="23.419346703s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:01.062909528 +0000 UTC m=+700.765526574" lastFinishedPulling="2026-02-24 05:49:19.195485146 +0000 UTC m=+718.898102202" observedRunningTime="2026-02-24 05:49:21.403466695 +0000 UTC m=+721.106083751" watchObservedRunningTime="2026-02-24 05:49:21.419346703 +0000 UTC m=+721.121963749"
Feb 24 05:49:21.420252 master-0 kubenswrapper[34361]: I0224 05:49:21.420222 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn"
Feb 24 05:49:21.434749 master-0 kubenswrapper[34361]: I0224 05:49:21.434676 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm" event={"ID":"3d52cd17-06ac-4536-a502-a7202bf0666d","Type":"ContainerStarted","Data":"c2ea3f6669fa29bc2b5da2e69b00e1277c73879953d04efe0f3954c401b7717d"}
Feb 24 05:49:21.435158 master-0 kubenswrapper[34361]: I0224 05:49:21.435116 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm"
Feb 24 05:49:21.446105 master-0 kubenswrapper[34361]: I0224 05:49:21.446060 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9" event={"ID":"2351f6f1-b876-4809-85ea-79fcdb287059","Type":"ContainerStarted","Data":"703da2db2c2335239d6c191f9ee10a66b70153b138715c2e46d5a729306a5f33"}
Feb 24 05:49:21.446420 master-0 kubenswrapper[34361]: I0224 05:49:21.446372 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9"
Feb 24 05:49:21.447911 master-0 kubenswrapper[34361]: I0224 05:49:21.447850 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" event={"ID":"d635d276-775a-4aef-b331-f03468985b12","Type":"ContainerStarted","Data":"36daf28d650df35d829b62a26c8303c14e21d38386117e1a32537fcc3aa4cb6f"}
Feb 24 05:49:21.460719 master-0 kubenswrapper[34361]: I0224 05:49:21.460655 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx" event={"ID":"de466d9b-824c-49c7-946e-be9936d48d41","Type":"ContainerStarted","Data":"a5a9c27d54cecb90fe138830ed073598a961585d16f8157f047d270130cb5604"}
Feb 24 05:49:21.462005 master-0 kubenswrapper[34361]: I0224 05:49:21.461981 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx"
Feb 24 05:49:21.466226 master-0 kubenswrapper[34361]: I0224 05:49:21.466149 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv" podStartSLOduration=5.605301987 podStartE2EDuration="24.466123535s" podCreationTimestamp="2026-02-24 05:48:57 +0000 UTC" firstStartedPulling="2026-02-24 05:48:59.606796087 +0000 UTC m=+699.309413133" lastFinishedPulling="2026-02-24 05:49:18.467617595 +0000 UTC m=+718.170234681" observedRunningTime="2026-02-24 05:49:21.443894075 +0000 UTC m=+721.146511541" watchObservedRunningTime="2026-02-24 05:49:21.466123535 +0000 UTC m=+721.168740581"
Feb 24 05:49:21.479240 master-0 kubenswrapper[34361]: I0224 05:49:21.479150 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2" podStartSLOduration=5.125754745 podStartE2EDuration="23.479121864s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:00.840543634 +0000 UTC m=+700.543160680" lastFinishedPulling="2026-02-24 05:49:19.193910753 +0000 UTC m=+718.896527799" observedRunningTime="2026-02-24 05:49:21.470905003 +0000 UTC m=+721.173522049" watchObservedRunningTime="2026-02-24 05:49:21.479121864 +0000 UTC m=+721.181738910"
Feb 24 05:49:21.531102 master-0 kubenswrapper[34361]: I0224 05:49:21.530374 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j" podStartSLOduration=4.109914517 podStartE2EDuration="24.530346196s" podCreationTimestamp="2026-02-24 05:48:57 +0000 UTC" firstStartedPulling="2026-02-24 05:48:59.379950443 +0000 UTC m=+699.082567489" lastFinishedPulling="2026-02-24 05:49:19.800382082 +0000 UTC m=+719.502999168" observedRunningTime="2026-02-24 05:49:21.499263957 +0000 UTC m=+721.201881003" watchObservedRunningTime="2026-02-24 05:49:21.530346196 +0000 UTC m=+721.232963262"
Feb 24 05:49:21.553215 master-0 kubenswrapper[34361]: I0224 05:49:21.551064 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm" podStartSLOduration=6.146946413 podStartE2EDuration="23.551035333s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:01.063441443 +0000 UTC m=+700.766058489" lastFinishedPulling="2026-02-24 05:49:18.467530363 +0000 UTC m=+718.170147409" observedRunningTime="2026-02-24 05:49:21.522036602 +0000 UTC m=+721.224653658" watchObservedRunningTime="2026-02-24 05:49:21.551035333 +0000 UTC m=+721.253652369"
Feb 24 05:49:21.573691 master-0 kubenswrapper[34361]: I0224 05:49:21.573571 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9" podStartSLOduration=5.751843307 podStartE2EDuration="24.57353798s" podCreationTimestamp="2026-02-24 05:48:57 +0000 UTC" firstStartedPulling="2026-02-24 05:48:59.646349164 +0000 UTC m=+699.348966210" lastFinishedPulling="2026-02-24 05:49:18.468043797 +0000 UTC m=+718.170660883" observedRunningTime="2026-02-24 05:49:21.550944981 +0000 UTC m=+721.253562037" watchObservedRunningTime="2026-02-24 05:49:21.57353798 +0000 UTC m=+721.276155016"
Feb 24 05:49:21.635648 master-0 kubenswrapper[34361]: I0224 05:49:21.635536 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx" podStartSLOduration=5.286791406 podStartE2EDuration="23.63550723s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:00.810488674 +0000 UTC m=+700.513105720" lastFinishedPulling="2026-02-24 05:49:19.159204488 +0000 UTC m=+718.861821544" observedRunningTime="2026-02-24 05:49:21.613771254 +0000 UTC m=+721.316388290" watchObservedRunningTime="2026-02-24 05:49:21.63550723 +0000 UTC m=+721.338124276"
Feb 24 05:49:21.665474 master-0 kubenswrapper[34361]: I0224 05:49:21.664951 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn" podStartSLOduration=4.271801971 podStartE2EDuration="24.664924244s" podCreationTimestamp="2026-02-24 05:48:57 +0000 UTC" firstStartedPulling="2026-02-24 05:48:59.405297626 +0000 UTC m=+699.107914672" lastFinishedPulling="2026-02-24 05:49:19.798419899 +0000 UTC m=+719.501036945" observedRunningTime="2026-02-24 05:49:21.651877602 +0000 UTC m=+721.354494648" watchObservedRunningTime="2026-02-24 05:49:21.664924244 +0000 UTC m=+721.367541290"
Feb 24 05:49:22.531354 master-0 kubenswrapper[34361]: I0224 05:49:22.530693 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" event={"ID":"59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa","Type":"ContainerStarted","Data":"2709250735e0ac47e29e9b28ad4bb057428601e652868340e7235c8a93b7e57a"}
Feb 24 05:49:22.532804 master-0 kubenswrapper[34361]: I0224 05:49:22.532773 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2"
Feb 24 05:49:22.566910 master-0 kubenswrapper[34361]: I0224 05:49:22.566666 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" event={"ID":"c44e156d-428e-4d11-ae59-37c01f013e24","Type":"ContainerStarted","Data":"08ba09b96ddc0547a2bedabc71b0a320007099216248b1f70450b9f02801bfbf"}
Feb 24 05:49:22.576372 master-0 kubenswrapper[34361]: I0224 05:49:22.567906 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4"
Feb 24 05:49:22.581871 master-0 kubenswrapper[34361]: I0224 05:49:22.581776 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" podStartSLOduration=5.312271363 podStartE2EDuration="24.581751678s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:01.062877307 +0000 UTC m=+700.765494353" lastFinishedPulling="2026-02-24 05:49:20.332357632 +0000 UTC m=+720.034974668" observedRunningTime="2026-02-24 05:49:22.566937729 +0000 UTC m=+722.269554775" watchObservedRunningTime="2026-02-24 05:49:22.581751678 +0000 UTC m=+722.284368724"
Feb 24 05:49:22.638421 master-0 kubenswrapper[34361]: I0224 05:49:22.632475 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4" podStartSLOduration=5.531143464 podStartE2EDuration="24.632444644s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:01.304477161 +0000 UTC m=+701.007094197" lastFinishedPulling="2026-02-24 05:49:20.405778331 +0000 UTC m=+720.108395377" observedRunningTime="2026-02-24 05:49:22.601265884 +0000 UTC m=+722.303882930" watchObservedRunningTime="2026-02-24 05:49:22.632444644 +0000 UTC m=+722.335061700"
Feb 24 05:49:22.643386 master-0 kubenswrapper[34361]: I0224 05:49:22.637843 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w" event={"ID":"b4a2f137-f0c7-481c-a308-51f97186744d","Type":"ContainerStarted","Data":"d3fd54c1e06aefa706710e6e93c63a6a35f558394f341495cd734b8ca06e7180"}
Feb 24 05:49:22.643386 master-0 kubenswrapper[34361]: I0224 05:49:22.639628 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w"
Feb 24 05:49:22.643386 master-0 kubenswrapper[34361]: I0224 05:49:22.639917 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8" event={"ID":"404ea967-df67-480c-bb2a-fd67aba90b6c","Type":"ContainerStarted","Data":"5f5158f88c5ae04102f8392afb3ed471379edfd9df92b8c1c9eba02b599805b3"}
Feb 24 05:49:22.643386 master-0 kubenswrapper[34361]: I0224 05:49:22.640519 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8"
Feb 24 05:49:22.654386 master-0 kubenswrapper[34361]: I0224 05:49:22.652657 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt" event={"ID":"ec02b59d-8c1d-4117-a043-2f2536f665e4","Type":"ContainerStarted","Data":"0e4ab94143a411120dbd953e4fdf4c226631c6b682e55a5b3f99817689ae6859"}
Feb 24 05:49:22.654386 master-0 kubenswrapper[34361]: I0224 05:49:22.653819 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt"
Feb 24 05:49:22.667809 master-0 kubenswrapper[34361]: I0224 05:49:22.667727 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j" event={"ID":"2573e160-9095-4011-a842-94316fa317b8","Type":"ContainerStarted","Data":"8be38fe2495f59972cfcf016b5185177daa5e6a85e5addca0f303b218cf832b3"}
Feb 24 05:49:22.669734 master-0 kubenswrapper[34361]: I0224 05:49:22.669695 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j"
Feb 24 05:49:22.680449 master-0 kubenswrapper[34361]: I0224 05:49:22.676730 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w" podStartSLOduration=5.723787436 podStartE2EDuration="24.676704157s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:00.239956035 +0000 UTC m=+699.942573081" lastFinishedPulling="2026-02-24 05:49:19.192872736 +0000 UTC m=+718.895489802" observedRunningTime="2026-02-24 05:49:22.655788933 +0000 UTC m=+722.358405979" watchObservedRunningTime="2026-02-24 05:49:22.676704157 +0000 UTC m=+722.379321203"
Feb 24 05:49:22.700945 master-0 kubenswrapper[34361]: I0224 05:49:22.700874 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh" event={"ID":"773875c3-3433-4c6f-bbc9-dc7c35a0eb4b","Type":"ContainerStarted","Data":"04903e98418ed4b5ed4d9e3a44b37083024641aac0f071e6734479ddaed12da6"}
Feb 24 05:49:22.742482 master-0 kubenswrapper[34361]: I0224 05:49:22.742424 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf" event={"ID":"4a1b71ea-8ffd-47f7-8c21-816211026592","Type":"ContainerStarted","Data":"6cdcbec8fa3e1505742b6cc695b7c5faf2d89a9eedffd4f58d7ff4e2ced0676f"}
Feb 24 05:49:22.743898 master-0 kubenswrapper[34361]: I0224 05:49:22.743880 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf"
Feb 24 05:49:22.752982 master-0 kubenswrapper[34361]: I0224 05:49:22.751467 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8" podStartSLOduration=5.173625725 podStartE2EDuration="24.751438621s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:00.844121651 +0000 UTC m=+700.546738687" lastFinishedPulling="2026-02-24 05:49:20.421934537 +0000 UTC m=+720.124551583" observedRunningTime="2026-02-24 05:49:22.727044724 +0000 UTC m=+722.429661800" watchObservedRunningTime="2026-02-24 05:49:22.751438621 +0000 UTC m=+722.454055667"
Feb 24 05:49:22.772926 master-0 kubenswrapper[34361]: I0224 05:49:22.772852 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b" event={"ID":"31d798ac-2e57-4ad1-a457-55a4dc84ba4f","Type":"ContainerStarted","Data":"8831d94517b87e3b50fc57ffd6dfd10d7ed271f64855528fc887c29bfd12da8d"}
Feb 24 05:49:22.773252 master-0 kubenswrapper[34361]: I0224 05:49:22.772979 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b"
Feb 24 05:49:22.782456 master-0 kubenswrapper[34361]: I0224 05:49:22.774934 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54" event={"ID":"30188323-a2c2-4fd3-8b5c-42ebe4e57777","Type":"ContainerStarted","Data":"c186ba0c3786cff6763d984b20a66c72e5ba874c558c5c64fb2781a99ca74b4d"}
Feb 24 05:49:22.782456 master-0 kubenswrapper[34361]: I0224 05:49:22.775919 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54"
Feb 24 05:49:22.782456 master-0 kubenswrapper[34361]: I0224 05:49:22.780285 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng" event={"ID":"4f457c03-e121-401c-b724-4dd147b7ff3b","Type":"ContainerStarted","Data":"998b1ea8413fb9afa67282b5dec8872d502fe68364a04de262304813a316af75"}
Feb 24 05:49:22.782456 master-0 kubenswrapper[34361]: I0224 05:49:22.780708 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng"
Feb 24 05:49:22.782668 master-0 kubenswrapper[34361]: I0224 05:49:22.782468 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5" event={"ID":"19710adc-933a-4331-b46a-c836b775f6c7","Type":"ContainerStarted","Data":"5051e4b5db385d7cadf51caaf61a17373430bf53c6ba530d54d2ec5c89bf408b"}
Feb 24 05:49:22.821489 master-0 kubenswrapper[34361]: I0224 05:49:22.821380 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt" podStartSLOduration=5.487120576 podStartE2EDuration="24.798289924s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:48:59.882494119 +0000 UTC m=+699.585111155" lastFinishedPulling="2026-02-24 05:49:19.193663417 +0000 UTC m=+718.896280503" observedRunningTime="2026-02-24 05:49:22.765451269 +0000 UTC m=+722.468068325" watchObservedRunningTime="2026-02-24 05:49:22.798289924 +0000 UTC m=+722.500906970"
Feb 24 05:49:22.825836 master-0 kubenswrapper[34361]: I0224 05:49:22.825778 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j" podStartSLOduration=5.264784383 podStartE2EDuration="24.825761945s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:00.239221575 +0000 UTC m=+699.941838621" lastFinishedPulling="2026-02-24 05:49:19.800199137 +0000 UTC m=+719.502816183" observedRunningTime="2026-02-24 05:49:22.824588714 +0000 UTC m=+722.527205760" watchObservedRunningTime="2026-02-24 05:49:22.825761945 +0000 UTC m=+722.528378991"
Feb 24 05:49:22.935333 master-0 kubenswrapper[34361]: I0224 05:49:22.929448 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vrbmh" podStartSLOduration=5.774141953 podStartE2EDuration="24.929414589s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:01.266516077 +0000 UTC m=+700.969133123" lastFinishedPulling="2026-02-24 05:49:20.421788673 +0000 UTC m=+720.124405759" observedRunningTime="2026-02-24 05:49:22.883660256 +0000 UTC m=+722.586277312" watchObservedRunningTime="2026-02-24 05:49:22.929414589 +0000 UTC m=+722.632031635"
Feb 24 05:49:22.944709 master-0 kubenswrapper[34361]: I0224 05:49:22.944595 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf" podStartSLOduration=4.901105278 podStartE2EDuration="24.944568127s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:00.238266149 +0000 UTC m=+699.940883195" lastFinishedPulling="2026-02-24 05:49:20.281728998 +0000 UTC m=+719.984346044" observedRunningTime="2026-02-24 05:49:22.934708792 +0000 UTC m=+722.637325848" watchObservedRunningTime="2026-02-24 05:49:22.944568127 +0000 UTC m=+722.647185173"
Feb 24 05:49:22.969828 master-0 kubenswrapper[34361]: I0224 05:49:22.967559 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b" podStartSLOduration=5.974638629 podStartE2EDuration="24.967498156s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:00.808544602 +0000 UTC m=+700.511161648" lastFinishedPulling="2026-02-24 05:49:19.801404119 +0000 UTC m=+719.504021175" observedRunningTime="2026-02-24 05:49:22.962241094 +0000 UTC m=+722.664858140" watchObservedRunningTime="2026-02-24 05:49:22.967498156 +0000 UTC m=+722.670115202"
Feb 24 05:49:22.996191 master-0 kubenswrapper[34361]: I0224 05:49:22.996078 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng" podStartSLOduration=6.149452354 podStartE2EDuration="25.996053156s" podCreationTimestamp="2026-02-24 05:48:57 +0000 UTC" firstStartedPulling="2026-02-24 05:48:59.34577044 +0000 UTC m=+699.048387486" lastFinishedPulling="2026-02-24 05:49:19.192371242 +0000 UTC m=+718.894988288" observedRunningTime="2026-02-24 05:49:22.993662211 +0000 UTC m=+722.696279297" watchObservedRunningTime="2026-02-24 05:49:22.996053156 +0000 UTC m=+722.698670212"
Feb 24 05:49:23.089846 master-0 kubenswrapper[34361]: I0224 05:49:23.089737 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54" podStartSLOduration=6.110620974 podStartE2EDuration="25.08970667s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:00.818190832 +0000 UTC m=+700.520807868" lastFinishedPulling="2026-02-24 05:49:19.797276518 +0000 UTC m=+719.499893564" observedRunningTime="2026-02-24 05:49:23.015289904 +0000 UTC m=+722.717906970" watchObservedRunningTime="2026-02-24 05:49:23.08970667 +0000 UTC m=+722.792323716"
Feb 24 05:49:23.103501 master-0 kubenswrapper[34361]: I0224 05:49:23.103386 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5" podStartSLOduration=6.352150404 podStartE2EDuration="25.103358348s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:01.045805297 +0000 UTC m=+700.748422343" lastFinishedPulling="2026-02-24 05:49:19.797013201 +0000 UTC m=+719.499630287" observedRunningTime="2026-02-24 05:49:23.04926049 +0000 UTC m=+722.751877536" watchObservedRunningTime="2026-02-24 05:49:23.103358348 +0000 UTC m=+722.805975424"
Feb 24 05:49:23.799239 master-0 kubenswrapper[34361]: I0224 05:49:23.799171 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5"
Feb 24 05:49:25.843634 master-0 kubenswrapper[34361]: I0224 05:49:25.843302 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" event={"ID":"d635d276-775a-4aef-b331-f03468985b12","Type":"ContainerStarted","Data":"ca1c5fd2049ecb7e869d840ffc715f609a087bbbd7d1cba5d3c89e85d4317bd6"}
Feb 24 05:49:25.843634 master-0 kubenswrapper[34361]: I0224 05:49:25.843512 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m"
Feb 24 05:49:25.851775 master-0 kubenswrapper[34361]: I0224 05:49:25.851665 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" event={"ID":"7e772b09-8f15-4d6f-be11-482fe9376b51","Type":"ContainerStarted","Data":"7cc41b019ef55533826008c360ddd978402366064c81affe34821799bd7f4026"}
Feb 24 05:49:25.851960 master-0 kubenswrapper[34361]: I0224 05:49:25.851869 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j"
Feb 24 05:49:25.883257 master-0 kubenswrapper[34361]: I0224 05:49:25.883054 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" podStartSLOduration=23.534623131 podStartE2EDuration="27.883015998s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:20.959245841 +0000 UTC m=+720.661862887" lastFinishedPulling="2026-02-24 05:49:25.307638708 +0000 UTC m=+725.010255754" observedRunningTime="2026-02-24 05:49:25.877425607 +0000 UTC m=+725.580042723" watchObservedRunningTime="2026-02-24 05:49:25.883015998 +0000 UTC m=+725.585633064"
Feb 24 05:49:25.947890 master-0 kubenswrapper[34361]: I0224 05:49:25.947740 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" podStartSLOduration=23.736406291 podStartE2EDuration="27.947704882s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="2026-02-24 05:49:21.096494251 +0000 UTC m=+720.799111297" lastFinishedPulling="2026-02-24 05:49:25.307792832 +0000 UTC m=+725.010409888" observedRunningTime="2026-02-24 05:49:25.931864025 +0000 UTC m=+725.634481131" watchObservedRunningTime="2026-02-24 05:49:25.947704882 +0000 UTC m=+725.650321938"
Feb 24 05:49:28.296433 master-0 kubenswrapper[34361]: I0224 05:49:28.296332 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-m52ng"
Feb 24 05:49:28.321773 master-0 kubenswrapper[34361]: I0224 05:49:28.321698 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-rngmn"
Feb 24 05:49:28.352838 master-0 kubenswrapper[34361]: I0224 05:49:28.352734 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-vq97j"
Feb 24 05:49:28.436339 master-0 kubenswrapper[34361]: I0224 05:49:28.436235 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-zghgv"
Feb 24 05:49:28.479185 master-0 kubenswrapper[34361]: I0224 05:49:28.477804 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-75df9"
Feb 24 05:49:28.610180 master-0 kubenswrapper[34361]: I0224 05:49:28.610001 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-gmljt"
Feb 24 05:49:28.778013 master-0 kubenswrapper[34361]: I0224 05:49:28.777930 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-db24j"
Feb 24 05:49:28.812359 master-0 kubenswrapper[34361]: I0224 05:49:28.811813 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-zlj5w"
Feb 24 05:49:28.848363 master-0 kubenswrapper[34361]: I0224 05:49:28.847746 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-qbghx"
Feb 24 05:49:28.869568 master-0 kubenswrapper[34361]: I0224 05:49:28.869401 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-28hdf"
Feb 24 05:49:29.025053 master-0 kubenswrapper[34361]: I0224 05:49:29.024950 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-svmn2"
Feb 24 05:49:29.082085 master-0 kubenswrapper[34361]: I0224 05:49:29.081499 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-sfjt8"
Feb 24 05:49:29.140473 master-0 kubenswrapper[34361]: I0224 05:49:29.140237 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-zbd8b"
Feb 24 05:49:29.214773 master-0 kubenswrapper[34361]: I0224 05:49:29.214701 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-z4h54"
Feb 24 05:49:29.457700 master-0 kubenswrapper[34361]: I0224 05:49:29.457452 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-8xrtm"
Feb 24 05:49:29.513195 master-0 kubenswrapper[34361]: I0224 05:49:29.513117 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2"
Feb 24 05:49:29.601464 master-0 kubenswrapper[34361]: I0224 05:49:29.601379 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-tc9k2"
Feb 24 05:49:29.769290 master-0 kubenswrapper[34361]: I0224 05:49:29.769201 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-9ljm5"
Feb 24 05:49:29.806381 master-0 kubenswrapper[34361]: I0224 05:49:29.806300 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-96zg4"
Feb 24 05:49:31.876599 master-0 kubenswrapper[34361]: I0224 05:49:31.875774 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"
Feb 24 05:49:31.876599 master-0 kubenswrapper[34361]: I0224 05:49:31.876017 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"
Feb 24 05:49:31.887653 master-0 kubenswrapper[34361]: I0224 05:49:31.887524 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-metrics-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"
Feb 24 05:49:31.907208 master-0 kubenswrapper[34361]: I0224 05:49:31.907048 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/10243cb4-f4ad-40a0-af84-04d9dc7c32c9-webhook-certs\") pod \"openstack-operator-controller-manager-5dc486cffc-rbqzr\" (UID: \"10243cb4-f4ad-40a0-af84-04d9dc7c32c9\") " pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"
Feb 24 05:49:31.922220 master-0 kubenswrapper[34361]: I0224 05:49:31.921458 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"
Feb 24 05:49:33.307359 master-0 kubenswrapper[34361]: I0224 05:49:33.305171 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"]
Feb 24 05:49:33.308032 master-0 kubenswrapper[34361]: W0224 05:49:33.307959 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10243cb4_f4ad_40a0_af84_04d9dc7c32c9.slice/crio-71e34fa443843b4d2d35534509389f992f8af7608d6854a1abbc5ef2d7b4a735 WatchSource:0}: Error finding container 71e34fa443843b4d2d35534509389f992f8af7608d6854a1abbc5ef2d7b4a735: Status 404 returned error can't find the container with id 71e34fa443843b4d2d35534509389f992f8af7608d6854a1abbc5ef2d7b4a735
Feb 24 05:49:34.015108 master-0 kubenswrapper[34361]: I0224 05:49:34.015037 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr"
event={"ID":"10243cb4-f4ad-40a0-af84-04d9dc7c32c9","Type":"ContainerStarted","Data":"44b7ac392046ceadf5f6a3a230932ceb9eda7e61ec14d5a059970017e0d34ea3"} Feb 24 05:49:34.015108 master-0 kubenswrapper[34361]: I0224 05:49:34.015105 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" event={"ID":"10243cb4-f4ad-40a0-af84-04d9dc7c32c9","Type":"ContainerStarted","Data":"71e34fa443843b4d2d35534509389f992f8af7608d6854a1abbc5ef2d7b4a735"} Feb 24 05:49:34.015591 master-0 kubenswrapper[34361]: I0224 05:49:34.015262 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:49:34.068572 master-0 kubenswrapper[34361]: I0224 05:49:34.068433 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" podStartSLOduration=36.068283807 podStartE2EDuration="36.068283807s" podCreationTimestamp="2026-02-24 05:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:49:34.046015827 +0000 UTC m=+733.748632903" watchObservedRunningTime="2026-02-24 05:49:34.068283807 +0000 UTC m=+733.770900883" Feb 24 05:49:34.368018 master-0 kubenswrapper[34361]: I0224 05:49:34.367827 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-bv48m" Feb 24 05:49:35.123954 master-0 kubenswrapper[34361]: I0224 05:49:35.123840 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-579b7786b92xw4j" Feb 24 05:49:41.933850 master-0 kubenswrapper[34361]: I0224 05:49:41.933785 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-5dc486cffc-rbqzr" Feb 24 05:50:27.218377 master-0 kubenswrapper[34361]: I0224 05:50:27.218269 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4lgxt"] Feb 24 05:50:27.220453 master-0 kubenswrapper[34361]: I0224 05:50:27.220309 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:27.227747 master-0 kubenswrapper[34361]: I0224 05:50:27.227685 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 24 05:50:27.227980 master-0 kubenswrapper[34361]: I0224 05:50:27.227903 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 24 05:50:27.228649 master-0 kubenswrapper[34361]: I0224 05:50:27.228044 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 24 05:50:27.234204 master-0 kubenswrapper[34361]: I0224 05:50:27.234108 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7jcz\" (UniqueName: \"kubernetes.io/projected/02de9f91-2491-4db8-b028-1c5357ded011-kube-api-access-n7jcz\") pod \"dnsmasq-dns-bc7f9869-4lgxt\" (UID: \"02de9f91-2491-4db8-b028-1c5357ded011\") " pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:27.234510 master-0 kubenswrapper[34361]: I0224 05:50:27.234463 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02de9f91-2491-4db8-b028-1c5357ded011-config\") pod \"dnsmasq-dns-bc7f9869-4lgxt\" (UID: \"02de9f91-2491-4db8-b028-1c5357ded011\") " pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:27.235338 master-0 kubenswrapper[34361]: I0224 05:50:27.235260 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4lgxt"] Feb 24 
05:50:27.340581 master-0 kubenswrapper[34361]: I0224 05:50:27.340426 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02de9f91-2491-4db8-b028-1c5357ded011-config\") pod \"dnsmasq-dns-bc7f9869-4lgxt\" (UID: \"02de9f91-2491-4db8-b028-1c5357ded011\") " pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:27.341081 master-0 kubenswrapper[34361]: I0224 05:50:27.340678 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7jcz\" (UniqueName: \"kubernetes.io/projected/02de9f91-2491-4db8-b028-1c5357ded011-kube-api-access-n7jcz\") pod \"dnsmasq-dns-bc7f9869-4lgxt\" (UID: \"02de9f91-2491-4db8-b028-1c5357ded011\") " pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:27.346179 master-0 kubenswrapper[34361]: I0224 05:50:27.346105 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02de9f91-2491-4db8-b028-1c5357ded011-config\") pod \"dnsmasq-dns-bc7f9869-4lgxt\" (UID: \"02de9f91-2491-4db8-b028-1c5357ded011\") " pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:27.370349 master-0 kubenswrapper[34361]: I0224 05:50:27.364722 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-5m7lz"] Feb 24 05:50:27.370349 master-0 kubenswrapper[34361]: I0224 05:50:27.366596 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.370349 master-0 kubenswrapper[34361]: I0224 05:50:27.368986 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 24 05:50:27.374335 master-0 kubenswrapper[34361]: I0224 05:50:27.372051 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7jcz\" (UniqueName: \"kubernetes.io/projected/02de9f91-2491-4db8-b028-1c5357ded011-kube-api-access-n7jcz\") pod \"dnsmasq-dns-bc7f9869-4lgxt\" (UID: \"02de9f91-2491-4db8-b028-1c5357ded011\") " pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:27.391425 master-0 kubenswrapper[34361]: I0224 05:50:27.376911 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-5m7lz"] Feb 24 05:50:27.442948 master-0 kubenswrapper[34361]: I0224 05:50:27.442657 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-dns-svc\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.443098 master-0 kubenswrapper[34361]: I0224 05:50:27.442990 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-config\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.443098 master-0 kubenswrapper[34361]: I0224 05:50:27.443052 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-927ht\" (UniqueName: \"kubernetes.io/projected/f921d6ac-4034-4bd4-b129-9115b362066d-kube-api-access-927ht\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: 
\"f921d6ac-4034-4bd4-b129-9115b362066d\") " pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.546857 master-0 kubenswrapper[34361]: I0224 05:50:27.546787 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-927ht\" (UniqueName: \"kubernetes.io/projected/f921d6ac-4034-4bd4-b129-9115b362066d-kube-api-access-927ht\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.547725 master-0 kubenswrapper[34361]: I0224 05:50:27.547696 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-dns-svc\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.548532 master-0 kubenswrapper[34361]: I0224 05:50:27.548393 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-config\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.549626 master-0 kubenswrapper[34361]: I0224 05:50:27.549593 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-dns-svc\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.550154 master-0 kubenswrapper[34361]: I0224 05:50:27.550044 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-config\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " 
pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.563186 master-0 kubenswrapper[34361]: I0224 05:50:27.563095 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:27.568193 master-0 kubenswrapper[34361]: I0224 05:50:27.568153 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-927ht\" (UniqueName: \"kubernetes.io/projected/f921d6ac-4034-4bd4-b129-9115b362066d-kube-api-access-927ht\") pod \"dnsmasq-dns-7d4c486879-5m7lz\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.738329 master-0 kubenswrapper[34361]: I0224 05:50:27.738250 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:27.938618 master-0 kubenswrapper[34361]: I0224 05:50:27.938557 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4lgxt"] Feb 24 05:50:27.962662 master-0 kubenswrapper[34361]: W0224 05:50:27.962612 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02de9f91_2491_4db8_b028_1c5357ded011.slice/crio-b5bbde839d96341d543524cba3e77627f4e3992a590df8e843ce86fef3b25ead WatchSource:0}: Error finding container b5bbde839d96341d543524cba3e77627f4e3992a590df8e843ce86fef3b25ead: Status 404 returned error can't find the container with id b5bbde839d96341d543524cba3e77627f4e3992a590df8e843ce86fef3b25ead Feb 24 05:50:28.320348 master-0 kubenswrapper[34361]: I0224 05:50:28.318362 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-5m7lz"] Feb 24 05:50:28.331350 master-0 kubenswrapper[34361]: W0224 05:50:28.327652 34361 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf921d6ac_4034_4bd4_b129_9115b362066d.slice/crio-255413d25d2a47048ddccf2afb648548d80a4d9bbbc030b8fd3827d947dba484 WatchSource:0}: Error finding container 255413d25d2a47048ddccf2afb648548d80a4d9bbbc030b8fd3827d947dba484: Status 404 returned error can't find the container with id 255413d25d2a47048ddccf2afb648548d80a4d9bbbc030b8fd3827d947dba484 Feb 24 05:50:28.817338 master-0 kubenswrapper[34361]: I0224 05:50:28.816558 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" event={"ID":"02de9f91-2491-4db8-b028-1c5357ded011","Type":"ContainerStarted","Data":"b5bbde839d96341d543524cba3e77627f4e3992a590df8e843ce86fef3b25ead"} Feb 24 05:50:28.824956 master-0 kubenswrapper[34361]: I0224 05:50:28.824856 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" event={"ID":"f921d6ac-4034-4bd4-b129-9115b362066d","Type":"ContainerStarted","Data":"255413d25d2a47048ddccf2afb648548d80a4d9bbbc030b8fd3827d947dba484"} Feb 24 05:50:30.514670 master-0 kubenswrapper[34361]: I0224 05:50:30.514547 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-5m7lz"] Feb 24 05:50:30.536905 master-0 kubenswrapper[34361]: I0224 05:50:30.536787 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-2t99f"] Feb 24 05:50:30.540395 master-0 kubenswrapper[34361]: I0224 05:50:30.540349 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.576479 master-0 kubenswrapper[34361]: I0224 05:50:30.572195 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-2t99f"] Feb 24 05:50:30.642196 master-0 kubenswrapper[34361]: I0224 05:50:30.642089 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6gx6\" (UniqueName: \"kubernetes.io/projected/90697da9-388e-4c2c-9959-12430b1c6848-kube-api-access-h6gx6\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.650430 master-0 kubenswrapper[34361]: I0224 05:50:30.642224 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-config\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.650430 master-0 kubenswrapper[34361]: I0224 05:50:30.642443 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-dns-svc\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.746052 master-0 kubenswrapper[34361]: I0224 05:50:30.745980 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-dns-svc\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.746360 master-0 kubenswrapper[34361]: I0224 05:50:30.746142 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-h6gx6\" (UniqueName: \"kubernetes.io/projected/90697da9-388e-4c2c-9959-12430b1c6848-kube-api-access-h6gx6\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.746360 master-0 kubenswrapper[34361]: I0224 05:50:30.746168 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-config\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.747481 master-0 kubenswrapper[34361]: I0224 05:50:30.747436 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-dns-svc\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.747866 master-0 kubenswrapper[34361]: I0224 05:50:30.747812 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-config\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.782482 master-0 kubenswrapper[34361]: I0224 05:50:30.780364 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6gx6\" (UniqueName: \"kubernetes.io/projected/90697da9-388e-4c2c-9959-12430b1c6848-kube-api-access-h6gx6\") pod \"dnsmasq-dns-6974cff98c-2t99f\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.883573 master-0 kubenswrapper[34361]: I0224 05:50:30.883428 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-bc7f9869-4lgxt"] Feb 24 05:50:30.916058 master-0 kubenswrapper[34361]: I0224 05:50:30.915918 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-k22s7"] Feb 24 05:50:30.921470 master-0 kubenswrapper[34361]: I0224 05:50:30.918386 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:30.922966 master-0 kubenswrapper[34361]: I0224 05:50:30.921906 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:30.927572 master-0 kubenswrapper[34361]: I0224 05:50:30.925606 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-k22s7"] Feb 24 05:50:30.966075 master-0 kubenswrapper[34361]: I0224 05:50:30.965981 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694c6\" (UniqueName: \"kubernetes.io/projected/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-kube-api-access-694c6\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:30.966184 master-0 kubenswrapper[34361]: I0224 05:50:30.966155 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:30.966329 master-0 kubenswrapper[34361]: I0224 05:50:30.966283 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-config\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " 
pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:31.070503 master-0 kubenswrapper[34361]: I0224 05:50:31.070334 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:31.070503 master-0 kubenswrapper[34361]: I0224 05:50:31.070420 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-config\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:31.070759 master-0 kubenswrapper[34361]: I0224 05:50:31.070656 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694c6\" (UniqueName: \"kubernetes.io/projected/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-kube-api-access-694c6\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:31.072862 master-0 kubenswrapper[34361]: I0224 05:50:31.072836 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-config\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:31.072942 master-0 kubenswrapper[34361]: I0224 05:50:31.072881 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-dns-svc\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" 
Feb 24 05:50:31.101344 master-0 kubenswrapper[34361]: I0224 05:50:31.101221 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694c6\" (UniqueName: \"kubernetes.io/projected/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-kube-api-access-694c6\") pod \"dnsmasq-dns-7c45d57b9c-k22s7\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") " pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:31.268928 master-0 kubenswrapper[34361]: I0224 05:50:31.266055 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:31.601529 master-0 kubenswrapper[34361]: I0224 05:50:31.601254 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-2t99f"] Feb 24 05:50:31.858476 master-0 kubenswrapper[34361]: I0224 05:50:31.858377 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-k22s7"] Feb 24 05:50:31.878277 master-0 kubenswrapper[34361]: W0224 05:50:31.878187 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6091c0d_046c_44f0_888c_dbc5ac5a7aae.slice/crio-1ae433538cdcd1d0dd339543bc958b8801113e679597a11d30b127e7438236d0 WatchSource:0}: Error finding container 1ae433538cdcd1d0dd339543bc958b8801113e679597a11d30b127e7438236d0: Status 404 returned error can't find the container with id 1ae433538cdcd1d0dd339543bc958b8801113e679597a11d30b127e7438236d0 Feb 24 05:50:31.936487 master-0 kubenswrapper[34361]: I0224 05:50:31.936371 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" event={"ID":"d6091c0d-046c-44f0-888c-dbc5ac5a7aae","Type":"ContainerStarted","Data":"1ae433538cdcd1d0dd339543bc958b8801113e679597a11d30b127e7438236d0"} Feb 24 05:50:31.941037 master-0 kubenswrapper[34361]: I0224 05:50:31.940994 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6974cff98c-2t99f" event={"ID":"90697da9-388e-4c2c-9959-12430b1c6848","Type":"ContainerStarted","Data":"749cc87ba1eda340ab813f8fac34b1c3db9896634f9c4bc1aa4584f1c59a02a9"} Feb 24 05:50:34.035171 master-0 kubenswrapper[34361]: I0224 05:50:34.029185 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 24 05:50:34.035171 master-0 kubenswrapper[34361]: I0224 05:50:34.033353 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 24 05:50:34.039512 master-0 kubenswrapper[34361]: I0224 05:50:34.036244 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 24 05:50:34.039512 master-0 kubenswrapper[34361]: I0224 05:50:34.039231 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 24 05:50:34.049965 master-0 kubenswrapper[34361]: I0224 05:50:34.047542 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 24 05:50:34.049965 master-0 kubenswrapper[34361]: I0224 05:50:34.049813 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 24 05:50:34.108542 master-0 kubenswrapper[34361]: I0224 05:50:34.108456 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn2kq\" (UniqueName: \"kubernetes.io/projected/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-kube-api-access-cn2kq\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.108542 master-0 kubenswrapper[34361]: I0224 05:50:34.108540 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-config-data\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " 
pod="openstack/memcached-0" Feb 24 05:50:34.108854 master-0 kubenswrapper[34361]: I0224 05:50:34.108611 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.108854 master-0 kubenswrapper[34361]: I0224 05:50:34.108655 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.108854 master-0 kubenswrapper[34361]: I0224 05:50:34.108706 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-kolla-config\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.221465 master-0 kubenswrapper[34361]: I0224 05:50:34.219838 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.221465 master-0 kubenswrapper[34361]: I0224 05:50:34.220003 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-kolla-config\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.221465 master-0 kubenswrapper[34361]: 
I0224 05:50:34.220361 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn2kq\" (UniqueName: \"kubernetes.io/projected/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-kube-api-access-cn2kq\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.221465 master-0 kubenswrapper[34361]: I0224 05:50:34.220440 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-config-data\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.221465 master-0 kubenswrapper[34361]: I0224 05:50:34.220554 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.224327 master-0 kubenswrapper[34361]: I0224 05:50:34.224272 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-kolla-config\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.235541 master-0 kubenswrapper[34361]: I0224 05:50:34.225977 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-config-data\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.236036 master-0 kubenswrapper[34361]: I0224 05:50:34.235992 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.254815 master-0 kubenswrapper[34361]: I0224 05:50:34.253267 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.280472 master-0 kubenswrapper[34361]: I0224 05:50:34.279235 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn2kq\" (UniqueName: \"kubernetes.io/projected/3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0-kube-api-access-cn2kq\") pod \"memcached-0\" (UID: \"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0\") " pod="openstack/memcached-0" Feb 24 05:50:34.370699 master-0 kubenswrapper[34361]: I0224 05:50:34.368007 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 24 05:50:34.757386 master-0 kubenswrapper[34361]: I0224 05:50:34.757304 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 05:50:34.761166 master-0 kubenswrapper[34361]: I0224 05:50:34.761004 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.763984 master-0 kubenswrapper[34361]: I0224 05:50:34.763924 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 24 05:50:34.764061 master-0 kubenswrapper[34361]: I0224 05:50:34.763958 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 24 05:50:34.764061 master-0 kubenswrapper[34361]: I0224 05:50:34.763992 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 24 05:50:34.764175 master-0 kubenswrapper[34361]: I0224 05:50:34.764130 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 24 05:50:34.768205 master-0 kubenswrapper[34361]: I0224 05:50:34.767473 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 24 05:50:34.768205 master-0 kubenswrapper[34361]: I0224 05:50:34.767507 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 24 05:50:34.831940 master-0 kubenswrapper[34361]: I0224 05:50:34.831823 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.831940 master-0 kubenswrapper[34361]: I0224 05:50:34.831895 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.832286 master-0 kubenswrapper[34361]: I0224 
05:50:34.831988 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ac06ab8-5197-4557-8124-583f49b6082b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.832744 master-0 kubenswrapper[34361]: I0224 05:50:34.832673 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ac06ab8-5197-4557-8124-583f49b6082b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.832744 master-0 kubenswrapper[34361]: I0224 05:50:34.832724 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f81fa97a-3a54-4f13-a867-f22d9416fbaa\" (UniqueName: \"kubernetes.io/csi/topolvm.io^44e90109-7af1-498d-97d2-9d560d8af3ad\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.832870 master-0 kubenswrapper[34361]: I0224 05:50:34.832794 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.832870 master-0 kubenswrapper[34361]: I0224 05:50:34.832824 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-config-data\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.833393 master-0 
kubenswrapper[34361]: I0224 05:50:34.832961 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.833393 master-0 kubenswrapper[34361]: I0224 05:50:34.833082 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.833393 master-0 kubenswrapper[34361]: I0224 05:50:34.833229 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.833393 master-0 kubenswrapper[34361]: I0224 05:50:34.833281 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ndsj\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-kube-api-access-4ndsj\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.840213 master-0 kubenswrapper[34361]: I0224 05:50:34.840167 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942153 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942260 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942330 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942355 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ndsj\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-kube-api-access-4ndsj\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942522 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942570 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942592 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ac06ab8-5197-4557-8124-583f49b6082b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942617 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ac06ab8-5197-4557-8124-583f49b6082b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942676 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f81fa97a-3a54-4f13-a867-f22d9416fbaa\" (UniqueName: \"kubernetes.io/csi/topolvm.io^44e90109-7af1-498d-97d2-9d560d8af3ad\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942758 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.942810 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.944334 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-config-data\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.945388 master-0 kubenswrapper[34361]: I0224 05:50:34.944657 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.947889 master-0 kubenswrapper[34361]: I0224 05:50:34.947861 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.948543 master-0 kubenswrapper[34361]: I0224 05:50:34.948513 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9ac06ab8-5197-4557-8124-583f49b6082b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.949130 master-0 kubenswrapper[34361]: I0224 05:50:34.949066 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.953746 master-0 
kubenswrapper[34361]: I0224 05:50:34.953702 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.955525 master-0 kubenswrapper[34361]: I0224 05:50:34.955489 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 05:50:34.955591 master-0 kubenswrapper[34361]: I0224 05:50:34.955525 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f81fa97a-3a54-4f13-a867-f22d9416fbaa\" (UniqueName: \"kubernetes.io/csi/topolvm.io^44e90109-7af1-498d-97d2-9d560d8af3ad\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/e6bd98ccb6e4aea4353d2ca2f1ccbca3f7e59cbdd7c93ac7729b2010da847b19/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.977361 master-0 kubenswrapper[34361]: I0224 05:50:34.976867 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9ac06ab8-5197-4557-8124-583f49b6082b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.978346 master-0 kubenswrapper[34361]: I0224 05:50:34.978272 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:34.992432 master-0 kubenswrapper[34361]: I0224 05:50:34.985012 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4ndsj\" (UniqueName: \"kubernetes.io/projected/9ac06ab8-5197-4557-8124-583f49b6082b-kube-api-access-4ndsj\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:35.026176 master-0 kubenswrapper[34361]: I0224 05:50:35.025948 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9ac06ab8-5197-4557-8124-583f49b6082b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:35.071286 master-0 kubenswrapper[34361]: I0224 05:50:35.068848 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 05:50:35.071909 master-0 kubenswrapper[34361]: I0224 05:50:35.071490 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.075035 master-0 kubenswrapper[34361]: I0224 05:50:35.074665 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 24 05:50:35.090964 master-0 kubenswrapper[34361]: I0224 05:50:35.090421 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 24 05:50:35.093380 master-0 kubenswrapper[34361]: I0224 05:50:35.091222 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 24 05:50:35.093380 master-0 kubenswrapper[34361]: I0224 05:50:35.091832 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 24 05:50:35.093380 master-0 kubenswrapper[34361]: I0224 05:50:35.092155 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 24 05:50:35.093380 master-0 kubenswrapper[34361]: I0224 05:50:35.093196 
34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 24 05:50:35.192821 master-0 kubenswrapper[34361]: I0224 05:50:35.192722 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 24 05:50:35.256194 master-0 kubenswrapper[34361]: I0224 05:50:35.256117 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c4de741-6cb1-4ef0-80c9-173c72825057-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.256194 master-0 kubenswrapper[34361]: I0224 05:50:35.256199 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c4de741-6cb1-4ef0-80c9-173c72825057-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.256734 master-0 kubenswrapper[34361]: I0224 05:50:35.256346 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28cb1ecd-6ba3-422b-a334-521132dedf93\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a5b8268-22ef-4dda-8fcf-8430e908eec2\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.256794 master-0 kubenswrapper[34361]: I0224 05:50:35.256695 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.256840 master-0 
kubenswrapper[34361]: I0224 05:50:35.256823 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcx9j\" (UniqueName: \"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-kube-api-access-dcx9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.256960 master-0 kubenswrapper[34361]: I0224 05:50:35.256848 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.257023 master-0 kubenswrapper[34361]: I0224 05:50:35.256995 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.258027 master-0 kubenswrapper[34361]: I0224 05:50:35.257803 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.258577 master-0 kubenswrapper[34361]: I0224 05:50:35.258549 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.259479 master-0 kubenswrapper[34361]: I0224 05:50:35.259265 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.259479 master-0 kubenswrapper[34361]: I0224 05:50:35.259350 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.362246 master-0 kubenswrapper[34361]: I0224 05:50:35.362085 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28cb1ecd-6ba3-422b-a334-521132dedf93\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a5b8268-22ef-4dda-8fcf-8430e908eec2\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.362246 master-0 kubenswrapper[34361]: I0224 05:50:35.362190 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.364059 master-0 kubenswrapper[34361]: I0224 05:50:35.363859 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcx9j\" (UniqueName: \"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-kube-api-access-dcx9j\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.364151 master-0 kubenswrapper[34361]: I0224 05:50:35.364130 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.364190 master-0 kubenswrapper[34361]: I0224 05:50:35.364164 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.364832 master-0 kubenswrapper[34361]: I0224 05:50:35.364382 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.365912 master-0 kubenswrapper[34361]: I0224 05:50:35.365814 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.365991 master-0 kubenswrapper[34361]: I0224 05:50:35.365897 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.366175 master-0 kubenswrapper[34361]: I0224 05:50:35.366102 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.366175 master-0 kubenswrapper[34361]: I0224 05:50:35.366143 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.366387 master-0 kubenswrapper[34361]: I0224 05:50:35.366334 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c4de741-6cb1-4ef0-80c9-173c72825057-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.366456 master-0 kubenswrapper[34361]: I0224 05:50:35.366428 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c4de741-6cb1-4ef0-80c9-173c72825057-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.366876 master-0 kubenswrapper[34361]: I0224 05:50:35.366841 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.367688 master-0 kubenswrapper[34361]: I0224 05:50:35.367626 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.368303 master-0 kubenswrapper[34361]: I0224 05:50:35.368255 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1c4de741-6cb1-4ef0-80c9-173c72825057-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.368510 master-0 kubenswrapper[34361]: I0224 05:50:35.368457 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 05:50:35.368587 master-0 kubenswrapper[34361]: I0224 05:50:35.368554 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28cb1ecd-6ba3-422b-a334-521132dedf93\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a5b8268-22ef-4dda-8fcf-8430e908eec2\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/895d7005e21b75256faa1f9a11cd77f01d5151b5e0ecb57bd33f921abda59f63/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.368746 master-0 kubenswrapper[34361]: I0224 05:50:35.368694 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.370865 master-0 kubenswrapper[34361]: I0224 05:50:35.370830 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.371196 master-0 kubenswrapper[34361]: I0224 05:50:35.371141 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.382979 master-0 kubenswrapper[34361]: I0224 05:50:35.382792 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcx9j\" (UniqueName: 
\"kubernetes.io/projected/1c4de741-6cb1-4ef0-80c9-173c72825057-kube-api-access-dcx9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.389375 master-0 kubenswrapper[34361]: I0224 05:50:35.389235 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1c4de741-6cb1-4ef0-80c9-173c72825057-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:35.393014 master-0 kubenswrapper[34361]: I0224 05:50:35.392971 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1c4de741-6cb1-4ef0-80c9-173c72825057-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:36.319121 master-0 kubenswrapper[34361]: I0224 05:50:36.318988 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 24 05:50:36.326827 master-0 kubenswrapper[34361]: I0224 05:50:36.326733 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 24 05:50:36.328966 master-0 kubenswrapper[34361]: I0224 05:50:36.328908 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 24 05:50:36.334560 master-0 kubenswrapper[34361]: I0224 05:50:36.334511 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 24 05:50:36.334721 master-0 kubenswrapper[34361]: I0224 05:50:36.334688 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 24 05:50:36.380352 master-0 kubenswrapper[34361]: I0224 05:50:36.378526 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 24 05:50:36.402974 master-0 kubenswrapper[34361]: I0224 05:50:36.402784 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.403405 master-0 kubenswrapper[34361]: I0224 05:50:36.403091 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.403405 master-0 kubenswrapper[34361]: I0224 05:50:36.403162 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 
24 05:50:36.403405 master-0 kubenswrapper[34361]: I0224 05:50:36.403251 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.403405 master-0 kubenswrapper[34361]: I0224 05:50:36.403335 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-62c45296-58e6-423d-9cca-31bf5b6d67c8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^736b4a36-8b9e-4996-9f36-2082f49e0205\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.403405 master-0 kubenswrapper[34361]: I0224 05:50:36.403396 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.403637 master-0 kubenswrapper[34361]: I0224 05:50:36.403435 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.403637 master-0 kubenswrapper[34361]: I0224 05:50:36.403484 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjbh\" (UniqueName: \"kubernetes.io/projected/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-kube-api-access-xtjbh\") pod \"openstack-galera-0\" (UID: 
\"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.506847 master-0 kubenswrapper[34361]: I0224 05:50:36.506739 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-62c45296-58e6-423d-9cca-31bf5b6d67c8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^736b4a36-8b9e-4996-9f36-2082f49e0205\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.506847 master-0 kubenswrapper[34361]: I0224 05:50:36.506849 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.507167 master-0 kubenswrapper[34361]: I0224 05:50:36.506908 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.507167 master-0 kubenswrapper[34361]: I0224 05:50:36.506949 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtjbh\" (UniqueName: \"kubernetes.io/projected/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-kube-api-access-xtjbh\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.507167 master-0 kubenswrapper[34361]: I0224 05:50:36.507016 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " 
pod="openstack/openstack-galera-0" Feb 24 05:50:36.507167 master-0 kubenswrapper[34361]: I0224 05:50:36.507081 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.507167 master-0 kubenswrapper[34361]: I0224 05:50:36.507130 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.507167 master-0 kubenswrapper[34361]: I0224 05:50:36.507164 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.516240 master-0 kubenswrapper[34361]: I0224 05:50:36.516156 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-kolla-config\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.517074 master-0 kubenswrapper[34361]: I0224 05:50:36.517009 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.518023 master-0 
kubenswrapper[34361]: I0224 05:50:36.517877 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.523865 master-0 kubenswrapper[34361]: I0224 05:50:36.523820 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-config-data-default\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.529176 master-0 kubenswrapper[34361]: I0224 05:50:36.529119 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 05:50:36.529238 master-0 kubenswrapper[34361]: I0224 05:50:36.529183 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-62c45296-58e6-423d-9cca-31bf5b6d67c8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^736b4a36-8b9e-4996-9f36-2082f49e0205\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/4d9f5520ca482a054247065067d0fdb2820b8003e3306d3b27b106f38c114a10/globalmount\"" pod="openstack/openstack-galera-0" Feb 24 05:50:36.536394 master-0 kubenswrapper[34361]: I0224 05:50:36.536033 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.540450 master-0 kubenswrapper[34361]: I0224 05:50:36.540334 34361 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.549430 master-0 kubenswrapper[34361]: I0224 05:50:36.542709 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtjbh\" (UniqueName: \"kubernetes.io/projected/1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d-kube-api-access-xtjbh\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:36.642079 master-0 kubenswrapper[34361]: I0224 05:50:36.641414 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f81fa97a-3a54-4f13-a867-f22d9416fbaa\" (UniqueName: \"kubernetes.io/csi/topolvm.io^44e90109-7af1-498d-97d2-9d560d8af3ad\") pod \"rabbitmq-server-0\" (UID: \"9ac06ab8-5197-4557-8124-583f49b6082b\") " pod="openstack/rabbitmq-server-0" Feb 24 05:50:36.899552 master-0 kubenswrapper[34361]: I0224 05:50:36.895416 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 24 05:50:37.520163 master-0 kubenswrapper[34361]: I0224 05:50:37.520104 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 05:50:37.523401 master-0 kubenswrapper[34361]: I0224 05:50:37.523379 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.532247 master-0 kubenswrapper[34361]: I0224 05:50:37.529689 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 24 05:50:37.532247 master-0 kubenswrapper[34361]: I0224 05:50:37.530340 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 24 05:50:37.532247 master-0 kubenswrapper[34361]: I0224 05:50:37.530610 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 24 05:50:37.538862 master-0 kubenswrapper[34361]: I0224 05:50:37.538828 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 24 05:50:37.633696 master-0 kubenswrapper[34361]: I0224 05:50:37.633572 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376b0bd6-b6ed-42ca-bc34-b3823b24637e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.633696 master-0 kubenswrapper[34361]: I0224 05:50:37.633651 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.633960 master-0 kubenswrapper[34361]: I0224 05:50:37.633706 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcb69\" (UniqueName: \"kubernetes.io/projected/376b0bd6-b6ed-42ca-bc34-b3823b24637e-kube-api-access-xcb69\") pod \"openstack-cell1-galera-0\" (UID: 
\"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.633960 master-0 kubenswrapper[34361]: I0224 05:50:37.633735 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/376b0bd6-b6ed-42ca-bc34-b3823b24637e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.633960 master-0 kubenswrapper[34361]: I0224 05:50:37.633759 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bec77aba-dbd4-474b-9c5a-cb1a27b429a1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c6a69266-7b89-4f94-8d89-86fd04c44440\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.633960 master-0 kubenswrapper[34361]: I0224 05:50:37.633776 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.633960 master-0 kubenswrapper[34361]: I0224 05:50:37.633820 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.633960 master-0 kubenswrapper[34361]: I0224 05:50:37.633861 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/376b0bd6-b6ed-42ca-bc34-b3823b24637e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.736546 master-0 kubenswrapper[34361]: I0224 05:50:37.736468 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.736546 master-0 kubenswrapper[34361]: I0224 05:50:37.736552 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/376b0bd6-b6ed-42ca-bc34-b3823b24637e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.736992 master-0 kubenswrapper[34361]: I0224 05:50:37.736713 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376b0bd6-b6ed-42ca-bc34-b3823b24637e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.736992 master-0 kubenswrapper[34361]: I0224 05:50:37.736764 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.736992 master-0 kubenswrapper[34361]: I0224 05:50:37.736802 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xcb69\" (UniqueName: \"kubernetes.io/projected/376b0bd6-b6ed-42ca-bc34-b3823b24637e-kube-api-access-xcb69\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.736992 master-0 kubenswrapper[34361]: I0224 05:50:37.736829 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/376b0bd6-b6ed-42ca-bc34-b3823b24637e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.736992 master-0 kubenswrapper[34361]: I0224 05:50:37.736857 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bec77aba-dbd4-474b-9c5a-cb1a27b429a1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c6a69266-7b89-4f94-8d89-86fd04c44440\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.736992 master-0 kubenswrapper[34361]: I0224 05:50:37.736883 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.738913 master-0 kubenswrapper[34361]: I0224 05:50:37.738756 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.739172 master-0 kubenswrapper[34361]: I0224 05:50:37.739116 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.739222 master-0 kubenswrapper[34361]: I0224 05:50:37.739173 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/376b0bd6-b6ed-42ca-bc34-b3823b24637e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.739298 master-0 kubenswrapper[34361]: I0224 05:50:37.739245 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/376b0bd6-b6ed-42ca-bc34-b3823b24637e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.740030 master-0 kubenswrapper[34361]: I0224 05:50:37.739975 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 05:50:37.740030 master-0 kubenswrapper[34361]: I0224 05:50:37.740011 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bec77aba-dbd4-474b-9c5a-cb1a27b429a1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c6a69266-7b89-4f94-8d89-86fd04c44440\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/eec5612d398437f100976de9567f728cf873855766b0b7de0fe37042742b0e65/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.742695 master-0 kubenswrapper[34361]: I0224 05:50:37.742647 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/376b0bd6-b6ed-42ca-bc34-b3823b24637e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.747492 master-0 kubenswrapper[34361]: I0224 05:50:37.747402 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376b0bd6-b6ed-42ca-bc34-b3823b24637e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:37.770991 master-0 kubenswrapper[34361]: I0224 05:50:37.770892 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcb69\" (UniqueName: \"kubernetes.io/projected/376b0bd6-b6ed-42ca-bc34-b3823b24637e-kube-api-access-xcb69\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0" Feb 24 05:50:38.015952 master-0 kubenswrapper[34361]: I0224 05:50:38.015851 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28cb1ecd-6ba3-422b-a334-521132dedf93\" (UniqueName: 
\"kubernetes.io/csi/topolvm.io^6a5b8268-22ef-4dda-8fcf-8430e908eec2\") pod \"rabbitmq-cell1-server-0\" (UID: \"1c4de741-6cb1-4ef0-80c9-173c72825057\") " pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:38.205607 master-0 kubenswrapper[34361]: I0224 05:50:38.205506 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:50:39.051164 master-0 kubenswrapper[34361]: I0224 05:50:39.051060 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-62c45296-58e6-423d-9cca-31bf5b6d67c8\" (UniqueName: \"kubernetes.io/csi/topolvm.io^736b4a36-8b9e-4996-9f36-2082f49e0205\") pod \"openstack-galera-0\" (UID: \"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d\") " pod="openstack/openstack-galera-0" Feb 24 05:50:39.087185 master-0 kubenswrapper[34361]: I0224 05:50:39.087067 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 24 05:50:39.565720 master-0 kubenswrapper[34361]: I0224 05:50:39.565627 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5kh8v"] Feb 24 05:50:39.572835 master-0 kubenswrapper[34361]: I0224 05:50:39.567504 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.572835 master-0 kubenswrapper[34361]: I0224 05:50:39.571378 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 24 05:50:39.572835 master-0 kubenswrapper[34361]: I0224 05:50:39.571691 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 24 05:50:39.598346 master-0 kubenswrapper[34361]: I0224 05:50:39.596548 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-86mtg"] Feb 24 05:50:39.599862 master-0 kubenswrapper[34361]: I0224 05:50:39.598712 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-run\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.599862 master-0 kubenswrapper[34361]: I0224 05:50:39.598781 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11f5497-b3de-43b7-9312-b06485f2df8a-ovn-controller-tls-certs\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.599862 master-0 kubenswrapper[34361]: I0224 05:50:39.598816 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-log-ovn\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.599862 master-0 kubenswrapper[34361]: I0224 05:50:39.598836 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b9dt8\" (UniqueName: \"kubernetes.io/projected/c11f5497-b3de-43b7-9312-b06485f2df8a-kube-api-access-b9dt8\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.599862 master-0 kubenswrapper[34361]: I0224 05:50:39.598906 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11f5497-b3de-43b7-9312-b06485f2df8a-combined-ca-bundle\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.599862 master-0 kubenswrapper[34361]: I0224 05:50:39.598958 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-run-ovn\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.599862 master-0 kubenswrapper[34361]: I0224 05:50:39.598986 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11f5497-b3de-43b7-9312-b06485f2df8a-scripts\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.600408 master-0 kubenswrapper[34361]: I0224 05:50:39.599934 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-86mtg" Feb 24 05:50:39.626347 master-0 kubenswrapper[34361]: I0224 05:50:39.622422 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kh8v"] Feb 24 05:50:39.663569 master-0 kubenswrapper[34361]: I0224 05:50:39.662648 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-86mtg"] Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.700963 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-run\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.701014 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11f5497-b3de-43b7-9312-b06485f2df8a-ovn-controller-tls-certs\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.701043 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-run\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg" Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.701073 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-log-ovn\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.705109 master-0 
kubenswrapper[34361]: I0224 05:50:39.701515 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9dt8\" (UniqueName: \"kubernetes.io/projected/c11f5497-b3de-43b7-9312-b06485f2df8a-kube-api-access-b9dt8\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.701573 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-lib\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg" Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.701595 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-etc-ovs\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg" Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.701825 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-scripts\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg" Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.701929 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11f5497-b3de-43b7-9312-b06485f2df8a-combined-ca-bundle\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v" Feb 24 05:50:39.705109 master-0 
kubenswrapper[34361]: I0224 05:50:39.701969 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-log\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.702168 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-run-ovn\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.702211 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-268rg\" (UniqueName: \"kubernetes.io/projected/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-kube-api-access-268rg\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.702292 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11f5497-b3de-43b7-9312-b06485f2df8a-scripts\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.702593 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-run\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.702736 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-log-ovn\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.702833 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c11f5497-b3de-43b7-9312-b06485f2df8a-var-run-ovn\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.705109 master-0 kubenswrapper[34361]: I0224 05:50:39.704677 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11f5497-b3de-43b7-9312-b06485f2df8a-scripts\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.725332 master-0 kubenswrapper[34361]: I0224 05:50:39.723397 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c11f5497-b3de-43b7-9312-b06485f2df8a-ovn-controller-tls-certs\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.725332 master-0 kubenswrapper[34361]: I0224 05:50:39.723619 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11f5497-b3de-43b7-9312-b06485f2df8a-combined-ca-bundle\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.733306 master-0 kubenswrapper[34361]: I0224 05:50:39.733235 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9dt8\" (UniqueName: \"kubernetes.io/projected/c11f5497-b3de-43b7-9312-b06485f2df8a-kube-api-access-b9dt8\") pod \"ovn-controller-5kh8v\" (UID: \"c11f5497-b3de-43b7-9312-b06485f2df8a\") " pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.805012 master-0 kubenswrapper[34361]: I0224 05:50:39.804914 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-268rg\" (UniqueName: \"kubernetes.io/projected/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-kube-api-access-268rg\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.805396 master-0 kubenswrapper[34361]: I0224 05:50:39.805064 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-run\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.805396 master-0 kubenswrapper[34361]: I0224 05:50:39.805152 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-lib\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.805396 master-0 kubenswrapper[34361]: I0224 05:50:39.805183 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-etc-ovs\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.805559 master-0 kubenswrapper[34361]: I0224 05:50:39.805468 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-scripts\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.805647 master-0 kubenswrapper[34361]: I0224 05:50:39.805609 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-log\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.805780 master-0 kubenswrapper[34361]: I0224 05:50:39.805732 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-etc-ovs\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.805847 master-0 kubenswrapper[34361]: I0224 05:50:39.805488 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-run\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.806238 master-0 kubenswrapper[34361]: I0224 05:50:39.806198 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-log\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.806287 master-0 kubenswrapper[34361]: I0224 05:50:39.806233 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-var-lib\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.809677 master-0 kubenswrapper[34361]: I0224 05:50:39.809632 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-scripts\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.828382 master-0 kubenswrapper[34361]: I0224 05:50:39.828289 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-268rg\" (UniqueName: \"kubernetes.io/projected/9a505c34-4d57-4f00-8ad1-ae7d585c2e0d-kube-api-access-268rg\") pod \"ovn-controller-ovs-86mtg\" (UID: \"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d\") " pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:39.905436 master-0 kubenswrapper[34361]: I0224 05:50:39.900550 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kh8v"
Feb 24 05:50:39.963503 master-0 kubenswrapper[34361]: I0224 05:50:39.963433 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:50:40.121728 master-0 kubenswrapper[34361]: I0224 05:50:40.121641 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bec77aba-dbd4-474b-9c5a-cb1a27b429a1\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c6a69266-7b89-4f94-8d89-86fd04c44440\") pod \"openstack-cell1-galera-0\" (UID: \"376b0bd6-b6ed-42ca-bc34-b3823b24637e\") " pod="openstack/openstack-cell1-galera-0"
Feb 24 05:50:40.256437 master-0 kubenswrapper[34361]: I0224 05:50:40.251040 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 24 05:50:41.151622 master-0 kubenswrapper[34361]: I0224 05:50:41.142200 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 24 05:50:41.151622 master-0 kubenswrapper[34361]: I0224 05:50:41.144430 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.151622 master-0 kubenswrapper[34361]: I0224 05:50:41.147405 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 24 05:50:41.151622 master-0 kubenswrapper[34361]: I0224 05:50:41.147436 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 24 05:50:41.151622 master-0 kubenswrapper[34361]: I0224 05:50:41.147724 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 24 05:50:41.151622 master-0 kubenswrapper[34361]: I0224 05:50:41.147814 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 24 05:50:41.159255 master-0 kubenswrapper[34361]: I0224 05:50:41.157114 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 24 05:50:41.242847 master-0 kubenswrapper[34361]: I0224 05:50:41.242736 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.242847 master-0 kubenswrapper[34361]: I0224 05:50:41.242836 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.243238 master-0 kubenswrapper[34361]: I0224 05:50:41.242911 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/15a24782-c245-408a-b5b7-4db9b8e57619-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.243238 master-0 kubenswrapper[34361]: I0224 05:50:41.242980 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a24782-c245-408a-b5b7-4db9b8e57619-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.243238 master-0 kubenswrapper[34361]: I0224 05:50:41.243037 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.243772 master-0 kubenswrapper[34361]: I0224 05:50:41.243662 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a24782-c245-408a-b5b7-4db9b8e57619-config\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.244497 master-0 kubenswrapper[34361]: I0224 05:50:41.244458 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ee5ad954-894e-4ab1-8df1-46fd7b431ce0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a22dfb5-eb95-4c91-bd4e-b6ff5989c3e9\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.244563 master-0 kubenswrapper[34361]: I0224 05:50:41.244507 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jx8b\" (UniqueName: \"kubernetes.io/projected/15a24782-c245-408a-b5b7-4db9b8e57619-kube-api-access-8jx8b\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.353973 master-0 kubenswrapper[34361]: I0224 05:50:41.352730 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a24782-c245-408a-b5b7-4db9b8e57619-config\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.353973 master-0 kubenswrapper[34361]: I0224 05:50:41.352803 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ee5ad954-894e-4ab1-8df1-46fd7b431ce0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a22dfb5-eb95-4c91-bd4e-b6ff5989c3e9\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.353973 master-0 kubenswrapper[34361]: I0224 05:50:41.352823 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jx8b\" (UniqueName: \"kubernetes.io/projected/15a24782-c245-408a-b5b7-4db9b8e57619-kube-api-access-8jx8b\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.353973 master-0 kubenswrapper[34361]: I0224 05:50:41.352865 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.353973 master-0 kubenswrapper[34361]: I0224 05:50:41.352885 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.353973 master-0 kubenswrapper[34361]: I0224 05:50:41.352922 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/15a24782-c245-408a-b5b7-4db9b8e57619-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.353973 master-0 kubenswrapper[34361]: I0224 05:50:41.352960 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a24782-c245-408a-b5b7-4db9b8e57619-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.353973 master-0 kubenswrapper[34361]: I0224 05:50:41.352996 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.373248 master-0 kubenswrapper[34361]: I0224 05:50:41.373187 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a24782-c245-408a-b5b7-4db9b8e57619-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.374868 master-0 kubenswrapper[34361]: I0224 05:50:41.373650 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/15a24782-c245-408a-b5b7-4db9b8e57619-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.374868 master-0 kubenswrapper[34361]: I0224 05:50:41.374806 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a24782-c245-408a-b5b7-4db9b8e57619-config\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.392366 master-0 kubenswrapper[34361]: I0224 05:50:41.378135 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.392366 master-0 kubenswrapper[34361]: I0224 05:50:41.379247 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.398335 master-0 kubenswrapper[34361]: I0224 05:50:41.396378 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a24782-c245-408a-b5b7-4db9b8e57619-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.438434 master-0 kubenswrapper[34361]: I0224 05:50:41.435122 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 05:50:41.438434 master-0 kubenswrapper[34361]: I0224 05:50:41.435189 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ee5ad954-894e-4ab1-8df1-46fd7b431ce0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a22dfb5-eb95-4c91-bd4e-b6ff5989c3e9\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/f77cdafe4358fdbee080fa4553c16b50e754b80b9425619c28f8331732209c73/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:41.457886 master-0 kubenswrapper[34361]: I0224 05:50:41.452068 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jx8b\" (UniqueName: \"kubernetes.io/projected/15a24782-c245-408a-b5b7-4db9b8e57619-kube-api-access-8jx8b\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:43.124109 master-0 kubenswrapper[34361]: I0224 05:50:43.124042 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ee5ad954-894e-4ab1-8df1-46fd7b431ce0\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6a22dfb5-eb95-4c91-bd4e-b6ff5989c3e9\") pod \"ovsdbserver-nb-0\" (UID: \"15a24782-c245-408a-b5b7-4db9b8e57619\") " pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:43.313744 master-0 kubenswrapper[34361]: I0224 05:50:43.313574 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 24 05:50:44.005976 master-0 kubenswrapper[34361]: I0224 05:50:44.005856 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 24 05:50:44.008397 master-0 kubenswrapper[34361]: I0224 05:50:44.008356 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.011616 master-0 kubenswrapper[34361]: I0224 05:50:44.011249 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Feb 24 05:50:44.012115 master-0 kubenswrapper[34361]: I0224 05:50:44.011983 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Feb 24 05:50:44.012488 master-0 kubenswrapper[34361]: I0224 05:50:44.012420 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Feb 24 05:50:44.028826 master-0 kubenswrapper[34361]: I0224 05:50:44.028508 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 24 05:50:44.138004 master-0 kubenswrapper[34361]: I0224 05:50:44.137828 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.138004 master-0 kubenswrapper[34361]: I0224 05:50:44.137946 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.138004 master-0 kubenswrapper[34361]: I0224 05:50:44.138009 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6bcea25-7406-4a85-8ba1-e12b630dfa9f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^25effcba-2761-4412-94ea-513236813c75\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.138985 master-0 kubenswrapper[34361]: I0224 05:50:44.138128 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.139768 master-0 kubenswrapper[34361]: I0224 05:50:44.138538 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kcw7\" (UniqueName: \"kubernetes.io/projected/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-kube-api-access-6kcw7\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.140367 master-0 kubenswrapper[34361]: I0224 05:50:44.139802 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.140367 master-0 kubenswrapper[34361]: I0224 05:50:44.140052 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.142231 master-0 kubenswrapper[34361]: I0224 05:50:44.142181 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.143522 master-0 kubenswrapper[34361]: I0224 05:50:44.143438 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 24 05:50:44.245181 master-0 kubenswrapper[34361]: I0224 05:50:44.245085 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.245552 master-0 kubenswrapper[34361]: I0224 05:50:44.245268 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kcw7\" (UniqueName: \"kubernetes.io/projected/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-kube-api-access-6kcw7\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.245552 master-0 kubenswrapper[34361]: I0224 05:50:44.245358 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.245552 master-0 kubenswrapper[34361]: I0224 05:50:44.245444 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.245785 master-0 kubenswrapper[34361]: I0224 05:50:44.245564 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.245785 master-0 kubenswrapper[34361]: I0224 05:50:44.245631 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.245785 master-0 kubenswrapper[34361]: I0224 05:50:44.245687 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.245785 master-0 kubenswrapper[34361]: I0224 05:50:44.245760 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f6bcea25-7406-4a85-8ba1-e12b630dfa9f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^25effcba-2761-4412-94ea-513236813c75\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.250458 master-0 kubenswrapper[34361]: I0224 05:50:44.250221 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-config\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.250458 master-0 kubenswrapper[34361]: I0224 05:50:44.250223 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.250458 master-0 kubenswrapper[34361]: I0224 05:50:44.250266 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.254267 master-0 kubenswrapper[34361]: I0224 05:50:44.253232 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.254267 master-0 kubenswrapper[34361]: I0224 05:50:44.253625 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.254267 master-0 kubenswrapper[34361]: I0224 05:50:44.254178 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 05:50:44.254267 master-0 kubenswrapper[34361]: I0224 05:50:44.254222 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f6bcea25-7406-4a85-8ba1-e12b630dfa9f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^25effcba-2761-4412-94ea-513236813c75\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/63675da8a6f88b567627676eaf0e366e4c7e1ed274669491dc866a4d71366288/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.259782 master-0 kubenswrapper[34361]: I0224 05:50:44.259672 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:44.277212 master-0 kubenswrapper[34361]: I0224 05:50:44.277140 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kcw7\" (UniqueName: \"kubernetes.io/projected/85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f-kube-api-access-6kcw7\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:45.639087 master-0 kubenswrapper[34361]: I0224 05:50:45.639012 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f6bcea25-7406-4a85-8ba1-e12b630dfa9f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^25effcba-2761-4412-94ea-513236813c75\") pod \"ovsdbserver-sb-0\" (UID: \"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f\") " pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:45.850997 master-0 kubenswrapper[34361]: I0224 05:50:45.850747 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Feb 24 05:50:48.227640 master-0 kubenswrapper[34361]: W0224 05:50:48.227534 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ac06ab8_5197_4557_8124_583f49b6082b.slice/crio-d24d0551e36f36073b3b826200e1f635ef5e87221ff62edc73d12ab1c1afb667 WatchSource:0}: Error finding container d24d0551e36f36073b3b826200e1f635ef5e87221ff62edc73d12ab1c1afb667: Status 404 returned error can't find the container with id d24d0551e36f36073b3b826200e1f635ef5e87221ff62edc73d12ab1c1afb667
Feb 24 05:50:49.303674 master-0 kubenswrapper[34361]: I0224 05:50:49.303595 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9ac06ab8-5197-4557-8124-583f49b6082b","Type":"ContainerStarted","Data":"d24d0551e36f36073b3b826200e1f635ef5e87221ff62edc73d12ab1c1afb667"}
Feb 24 05:50:49.974712 master-0 kubenswrapper[34361]: I0224 05:50:49.974604 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 24 05:50:50.005115 master-0 kubenswrapper[34361]: I0224 05:50:50.004505 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 24 05:50:50.016217 master-0 kubenswrapper[34361]: I0224 05:50:50.016145 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 24 05:50:50.038729 master-0 kubenswrapper[34361]: W0224 05:50:50.038582 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod376b0bd6_b6ed_42ca_bc34_b3823b24637e.slice/crio-2ead8bd1d561a132025cab3b4c390a0f3515dd52dd19c60a7913e2c38e0e854b WatchSource:0}: Error finding container 2ead8bd1d561a132025cab3b4c390a0f3515dd52dd19c60a7913e2c38e0e854b: Status 404 returned error can't find the container with id 2ead8bd1d561a132025cab3b4c390a0f3515dd52dd19c60a7913e2c38e0e854b
Feb 24 05:50:50.058910 master-0 kubenswrapper[34361]: W0224 05:50:50.058827 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab53d19_8c7c_4c4a_9372_c9e7fe2debd0.slice/crio-e65179d8474343754163140394608d136a25e6ef91c4e8430507635d897c1cb1 WatchSource:0}: Error finding container e65179d8474343754163140394608d136a25e6ef91c4e8430507635d897c1cb1: Status 404 returned error can't find the container with id e65179d8474343754163140394608d136a25e6ef91c4e8430507635d897c1cb1
Feb 24 05:50:50.107041 master-0 kubenswrapper[34361]: W0224 05:50:50.106947 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c2bd63a_de4d_4b6a_ae00_b96c7b13d38d.slice/crio-8ec3a0ab0027bc3799fbf83ae19f54c953e38ed08d29219067022952d000f202 WatchSource:0}: Error finding container 8ec3a0ab0027bc3799fbf83ae19f54c953e38ed08d29219067022952d000f202: Status 404 returned error can't find the container with id 8ec3a0ab0027bc3799fbf83ae19f54c953e38ed08d29219067022952d000f202
Feb 24 05:50:50.112574 master-0 kubenswrapper[34361]: I0224 05:50:50.112508 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 24 05:50:50.213269 master-0 kubenswrapper[34361]: W0224 05:50:50.213209 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15a24782_c245_408a_b5b7_4db9b8e57619.slice/crio-50d72f1eb96b8dccd0e0c58662d4792b43513707b170b72d8d8d0bc11af0751e WatchSource:0}: Error finding container 50d72f1eb96b8dccd0e0c58662d4792b43513707b170b72d8d8d0bc11af0751e: Status 404 returned error can't find the container with id 50d72f1eb96b8dccd0e0c58662d4792b43513707b170b72d8d8d0bc11af0751e
Feb 24 05:50:50.222249 master-0 kubenswrapper[34361]: I0224 05:50:50.222205 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 24 05:50:50.323804 master-0 kubenswrapper[34361]: I0224 05:50:50.323688 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"15a24782-c245-408a-b5b7-4db9b8e57619","Type":"ContainerStarted","Data":"50d72f1eb96b8dccd0e0c58662d4792b43513707b170b72d8d8d0bc11af0751e"}
Feb 24 05:50:50.328541 master-0 kubenswrapper[34361]: I0224 05:50:50.327934 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0","Type":"ContainerStarted","Data":"e65179d8474343754163140394608d136a25e6ef91c4e8430507635d897c1cb1"}
Feb 24 05:50:50.329451 master-0 kubenswrapper[34361]: I0224 05:50:50.329404 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c4de741-6cb1-4ef0-80c9-173c72825057","Type":"ContainerStarted","Data":"7e0df1f9f480ea5282d253a6b355e642a09952847b3eee27a8575992e967858a"}
Feb 24 05:50:50.331898 master-0 kubenswrapper[34361]: I0224 05:50:50.331856 34361 generic.go:334] "Generic (PLEG): container finished" podID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" containerID="ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf" exitCode=0
Feb 24 05:50:50.331998 master-0 kubenswrapper[34361]: I0224 05:50:50.331919 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" event={"ID":"d6091c0d-046c-44f0-888c-dbc5ac5a7aae","Type":"ContainerDied","Data":"ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf"}
Feb 24 05:50:50.335663 master-0 kubenswrapper[34361]: I0224 05:50:50.335482 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"376b0bd6-b6ed-42ca-bc34-b3823b24637e","Type":"ContainerStarted","Data":"2ead8bd1d561a132025cab3b4c390a0f3515dd52dd19c60a7913e2c38e0e854b"}
Feb 24 05:50:50.339415 master-0 kubenswrapper[34361]: I0224 05:50:50.339369 34361 generic.go:334] "Generic (PLEG): container finished" podID="90697da9-388e-4c2c-9959-12430b1c6848" containerID="6efaa18b29316da860689d53e8ebc140008429bb0cd6165ea5edcf4fd1019454" exitCode=0
Feb 24 05:50:50.339516 master-0 kubenswrapper[34361]: I0224 05:50:50.339447 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" event={"ID":"90697da9-388e-4c2c-9959-12430b1c6848","Type":"ContainerDied","Data":"6efaa18b29316da860689d53e8ebc140008429bb0cd6165ea5edcf4fd1019454"}
Feb 24 05:50:50.344736 master-0 kubenswrapper[34361]: I0224 05:50:50.344702 34361 generic.go:334] "Generic (PLEG): container finished" podID="02de9f91-2491-4db8-b028-1c5357ded011" containerID="81f6dda8c6eac07a8effb04feaf1fc21945efe995f825111d2fad3b9dff2a716" exitCode=0
Feb 24 05:50:50.344831 master-0 kubenswrapper[34361]: I0224 05:50:50.344778 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" event={"ID":"02de9f91-2491-4db8-b028-1c5357ded011","Type":"ContainerDied","Data":"81f6dda8c6eac07a8effb04feaf1fc21945efe995f825111d2fad3b9dff2a716"}
Feb 24 05:50:50.349553 master-0 kubenswrapper[34361]: I0224 05:50:50.349480 34361 generic.go:334] "Generic (PLEG): container finished" podID="f921d6ac-4034-4bd4-b129-9115b362066d" containerID="65ec29e69af69c05b938a3c0b6d6101ee415de6419fcffdf26893795219eeacb" exitCode=0
Feb 24 05:50:50.349730 master-0 kubenswrapper[34361]: I0224 05:50:50.349638 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" event={"ID":"f921d6ac-4034-4bd4-b129-9115b362066d","Type":"ContainerDied","Data":"65ec29e69af69c05b938a3c0b6d6101ee415de6419fcffdf26893795219eeacb"}
Feb 24 05:50:50.354015 master-0 kubenswrapper[34361]: I0224 05:50:50.353959 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d","Type":"ContainerStarted","Data":"8ec3a0ab0027bc3799fbf83ae19f54c953e38ed08d29219067022952d000f202"}
Feb 24
05:50:50.596497 master-0 kubenswrapper[34361]: I0224 05:50:50.596046 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kh8v"] Feb 24 05:50:50.740468 master-0 kubenswrapper[34361]: I0224 05:50:50.740407 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 24 05:50:50.773668 master-0 kubenswrapper[34361]: W0224 05:50:50.773567 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85ba8f6d_e87e_4b58_b4c8_04fddb2fab3f.slice/crio-359624ec7d00ffdd0235376ec187240c6b1bddff9bf26734f61eccfcccdeb994 WatchSource:0}: Error finding container 359624ec7d00ffdd0235376ec187240c6b1bddff9bf26734f61eccfcccdeb994: Status 404 returned error can't find the container with id 359624ec7d00ffdd0235376ec187240c6b1bddff9bf26734f61eccfcccdeb994 Feb 24 05:50:50.963518 master-0 kubenswrapper[34361]: I0224 05:50:50.963432 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-86mtg"] Feb 24 05:50:50.989969 master-0 kubenswrapper[34361]: W0224 05:50:50.989885 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a505c34_4d57_4f00_8ad1_ae7d585c2e0d.slice/crio-09f998df2a86f1972e53c015561dd418f3c62aa66cf8d08e06d31fef0931b408 WatchSource:0}: Error finding container 09f998df2a86f1972e53c015561dd418f3c62aa66cf8d08e06d31fef0931b408: Status 404 returned error can't find the container with id 09f998df2a86f1972e53c015561dd418f3c62aa66cf8d08e06d31fef0931b408 Feb 24 05:50:51.020219 master-0 kubenswrapper[34361]: I0224 05:50:51.020163 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:51.150925 master-0 kubenswrapper[34361]: I0224 05:50:51.150170 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-config\") pod \"f921d6ac-4034-4bd4-b129-9115b362066d\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " Feb 24 05:50:51.151267 master-0 kubenswrapper[34361]: I0224 05:50:51.151057 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-927ht\" (UniqueName: \"kubernetes.io/projected/f921d6ac-4034-4bd4-b129-9115b362066d-kube-api-access-927ht\") pod \"f921d6ac-4034-4bd4-b129-9115b362066d\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " Feb 24 05:50:51.151267 master-0 kubenswrapper[34361]: I0224 05:50:51.151259 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-dns-svc\") pod \"f921d6ac-4034-4bd4-b129-9115b362066d\" (UID: \"f921d6ac-4034-4bd4-b129-9115b362066d\") " Feb 24 05:50:51.156763 master-0 kubenswrapper[34361]: I0224 05:50:51.156576 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f921d6ac-4034-4bd4-b129-9115b362066d-kube-api-access-927ht" (OuterVolumeSpecName: "kube-api-access-927ht") pod "f921d6ac-4034-4bd4-b129-9115b362066d" (UID: "f921d6ac-4034-4bd4-b129-9115b362066d"). InnerVolumeSpecName "kube-api-access-927ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:50:51.166563 master-0 kubenswrapper[34361]: I0224 05:50:51.166528 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:51.178895 master-0 kubenswrapper[34361]: I0224 05:50:51.178831 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f921d6ac-4034-4bd4-b129-9115b362066d" (UID: "f921d6ac-4034-4bd4-b129-9115b362066d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:50:51.180383 master-0 kubenswrapper[34361]: I0224 05:50:51.180347 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-config" (OuterVolumeSpecName: "config") pod "f921d6ac-4034-4bd4-b129-9115b362066d" (UID: "f921d6ac-4034-4bd4-b129-9115b362066d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:50:51.254540 master-0 kubenswrapper[34361]: I0224 05:50:51.254383 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7jcz\" (UniqueName: \"kubernetes.io/projected/02de9f91-2491-4db8-b028-1c5357ded011-kube-api-access-n7jcz\") pod \"02de9f91-2491-4db8-b028-1c5357ded011\" (UID: \"02de9f91-2491-4db8-b028-1c5357ded011\") " Feb 24 05:50:51.254910 master-0 kubenswrapper[34361]: I0224 05:50:51.254546 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02de9f91-2491-4db8-b028-1c5357ded011-config\") pod \"02de9f91-2491-4db8-b028-1c5357ded011\" (UID: \"02de9f91-2491-4db8-b028-1c5357ded011\") " Feb 24 05:50:51.255382 master-0 kubenswrapper[34361]: I0224 05:50:51.255358 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:50:51.255382 master-0 kubenswrapper[34361]: I0224 
05:50:51.255379 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-927ht\" (UniqueName: \"kubernetes.io/projected/f921d6ac-4034-4bd4-b129-9115b362066d-kube-api-access-927ht\") on node \"master-0\" DevicePath \"\"" Feb 24 05:50:51.255494 master-0 kubenswrapper[34361]: I0224 05:50:51.255391 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f921d6ac-4034-4bd4-b129-9115b362066d-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:50:51.260532 master-0 kubenswrapper[34361]: I0224 05:50:51.260433 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02de9f91-2491-4db8-b028-1c5357ded011-kube-api-access-n7jcz" (OuterVolumeSpecName: "kube-api-access-n7jcz") pod "02de9f91-2491-4db8-b028-1c5357ded011" (UID: "02de9f91-2491-4db8-b028-1c5357ded011"). InnerVolumeSpecName "kube-api-access-n7jcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:50:51.277788 master-0 kubenswrapper[34361]: I0224 05:50:51.277666 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02de9f91-2491-4db8-b028-1c5357ded011-config" (OuterVolumeSpecName: "config") pod "02de9f91-2491-4db8-b028-1c5357ded011" (UID: "02de9f91-2491-4db8-b028-1c5357ded011"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:50:51.357167 master-0 kubenswrapper[34361]: I0224 05:50:51.357018 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02de9f91-2491-4db8-b028-1c5357ded011-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:50:51.357167 master-0 kubenswrapper[34361]: I0224 05:50:51.357079 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7jcz\" (UniqueName: \"kubernetes.io/projected/02de9f91-2491-4db8-b028-1c5357ded011-kube-api-access-n7jcz\") on node \"master-0\" DevicePath \"\"" Feb 24 05:50:51.377090 master-0 kubenswrapper[34361]: I0224 05:50:51.377019 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f","Type":"ContainerStarted","Data":"359624ec7d00ffdd0235376ec187240c6b1bddff9bf26734f61eccfcccdeb994"} Feb 24 05:50:51.385231 master-0 kubenswrapper[34361]: I0224 05:50:51.384564 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" event={"ID":"90697da9-388e-4c2c-9959-12430b1c6848","Type":"ContainerStarted","Data":"0eec7b21cdeb0a572f525a0faa373d7aac15567dcd141f7e012790d33f4b2e77"} Feb 24 05:50:51.385231 master-0 kubenswrapper[34361]: I0224 05:50:51.384746 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:51.387587 master-0 kubenswrapper[34361]: I0224 05:50:51.387208 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" event={"ID":"02de9f91-2491-4db8-b028-1c5357ded011","Type":"ContainerDied","Data":"b5bbde839d96341d543524cba3e77627f4e3992a590df8e843ce86fef3b25ead"} Feb 24 05:50:51.387587 master-0 kubenswrapper[34361]: I0224 05:50:51.387296 34361 scope.go:117] "RemoveContainer" containerID="81f6dda8c6eac07a8effb04feaf1fc21945efe995f825111d2fad3b9dff2a716" Feb 24 
05:50:51.387587 master-0 kubenswrapper[34361]: I0224 05:50:51.387487 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc7f9869-4lgxt" Feb 24 05:50:51.394247 master-0 kubenswrapper[34361]: I0224 05:50:51.393918 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v" event={"ID":"c11f5497-b3de-43b7-9312-b06485f2df8a","Type":"ContainerStarted","Data":"df53bbf91df6e08ba47da0ba39dde5010303ad55cf99460aa89820f95e8a0ede"} Feb 24 05:50:51.398084 master-0 kubenswrapper[34361]: I0224 05:50:51.397947 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mtg" event={"ID":"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d","Type":"ContainerStarted","Data":"09f998df2a86f1972e53c015561dd418f3c62aa66cf8d08e06d31fef0931b408"} Feb 24 05:50:51.404400 master-0 kubenswrapper[34361]: I0224 05:50:51.402193 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" event={"ID":"f921d6ac-4034-4bd4-b129-9115b362066d","Type":"ContainerDied","Data":"255413d25d2a47048ddccf2afb648548d80a4d9bbbc030b8fd3827d947dba484"} Feb 24 05:50:51.404400 master-0 kubenswrapper[34361]: I0224 05:50:51.402342 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d4c486879-5m7lz" Feb 24 05:50:51.411975 master-0 kubenswrapper[34361]: I0224 05:50:51.410497 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" event={"ID":"d6091c0d-046c-44f0-888c-dbc5ac5a7aae","Type":"ContainerStarted","Data":"3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2"} Feb 24 05:50:51.411975 master-0 kubenswrapper[34361]: I0224 05:50:51.411624 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:51.417887 master-0 kubenswrapper[34361]: I0224 05:50:51.417370 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" podStartSLOduration=3.682932846 podStartE2EDuration="21.417346701s" podCreationTimestamp="2026-02-24 05:50:30 +0000 UTC" firstStartedPulling="2026-02-24 05:50:31.610544982 +0000 UTC m=+791.313162028" lastFinishedPulling="2026-02-24 05:50:49.344958837 +0000 UTC m=+809.047575883" observedRunningTime="2026-02-24 05:50:51.40877507 +0000 UTC m=+811.111392126" watchObservedRunningTime="2026-02-24 05:50:51.417346701 +0000 UTC m=+811.119963747" Feb 24 05:50:51.526400 master-0 kubenswrapper[34361]: I0224 05:50:51.526218 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4lgxt"] Feb 24 05:50:51.533675 master-0 kubenswrapper[34361]: I0224 05:50:51.533640 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bc7f9869-4lgxt"] Feb 24 05:50:51.536950 master-0 kubenswrapper[34361]: I0224 05:50:51.536635 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" podStartSLOduration=4.026908379 podStartE2EDuration="21.536614077s" podCreationTimestamp="2026-02-24 05:50:30 +0000 UTC" firstStartedPulling="2026-02-24 05:50:31.891045083 +0000 UTC m=+791.593662129" 
lastFinishedPulling="2026-02-24 05:50:49.400750781 +0000 UTC m=+809.103367827" observedRunningTime="2026-02-24 05:50:51.535594749 +0000 UTC m=+811.238211805" watchObservedRunningTime="2026-02-24 05:50:51.536614077 +0000 UTC m=+811.239231123" Feb 24 05:50:51.597772 master-0 kubenswrapper[34361]: I0224 05:50:51.597654 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-5m7lz"] Feb 24 05:50:51.604688 master-0 kubenswrapper[34361]: I0224 05:50:51.604542 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d4c486879-5m7lz"] Feb 24 05:50:52.614169 master-0 kubenswrapper[34361]: I0224 05:50:52.614057 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02de9f91-2491-4db8-b028-1c5357ded011" path="/var/lib/kubelet/pods/02de9f91-2491-4db8-b028-1c5357ded011/volumes" Feb 24 05:50:52.615299 master-0 kubenswrapper[34361]: I0224 05:50:52.615173 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f921d6ac-4034-4bd4-b129-9115b362066d" path="/var/lib/kubelet/pods/f921d6ac-4034-4bd4-b129-9115b362066d/volumes" Feb 24 05:50:54.916649 master-0 kubenswrapper[34361]: I0224 05:50:54.916018 34361 scope.go:117] "RemoveContainer" containerID="65ec29e69af69c05b938a3c0b6d6101ee415de6419fcffdf26893795219eeacb" Feb 24 05:50:55.926487 master-0 kubenswrapper[34361]: I0224 05:50:55.925563 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:56.268674 master-0 kubenswrapper[34361]: I0224 05:50:56.268562 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" Feb 24 05:50:56.378233 master-0 kubenswrapper[34361]: I0224 05:50:56.376488 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-2t99f"] Feb 24 05:50:56.483276 master-0 kubenswrapper[34361]: I0224 05:50:56.481989 34361 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" podUID="90697da9-388e-4c2c-9959-12430b1c6848" containerName="dnsmasq-dns" containerID="cri-o://0eec7b21cdeb0a572f525a0faa373d7aac15567dcd141f7e012790d33f4b2e77" gracePeriod=10 Feb 24 05:50:57.498190 master-0 kubenswrapper[34361]: I0224 05:50:57.498101 34361 generic.go:334] "Generic (PLEG): container finished" podID="90697da9-388e-4c2c-9959-12430b1c6848" containerID="0eec7b21cdeb0a572f525a0faa373d7aac15567dcd141f7e012790d33f4b2e77" exitCode=0 Feb 24 05:50:57.498190 master-0 kubenswrapper[34361]: I0224 05:50:57.498192 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" event={"ID":"90697da9-388e-4c2c-9959-12430b1c6848","Type":"ContainerDied","Data":"0eec7b21cdeb0a572f525a0faa373d7aac15567dcd141f7e012790d33f4b2e77"} Feb 24 05:50:58.470395 master-0 kubenswrapper[34361]: I0224 05:50:58.469538 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:58.500416 master-0 kubenswrapper[34361]: I0224 05:50:58.499546 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-config\") pod \"90697da9-388e-4c2c-9959-12430b1c6848\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " Feb 24 05:50:58.500416 master-0 kubenswrapper[34361]: I0224 05:50:58.499929 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6gx6\" (UniqueName: \"kubernetes.io/projected/90697da9-388e-4c2c-9959-12430b1c6848-kube-api-access-h6gx6\") pod \"90697da9-388e-4c2c-9959-12430b1c6848\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " Feb 24 05:50:58.500416 master-0 kubenswrapper[34361]: I0224 05:50:58.500111 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-dns-svc\") pod \"90697da9-388e-4c2c-9959-12430b1c6848\" (UID: \"90697da9-388e-4c2c-9959-12430b1c6848\") " Feb 24 05:50:58.529439 master-0 kubenswrapper[34361]: I0224 05:50:58.520266 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90697da9-388e-4c2c-9959-12430b1c6848-kube-api-access-h6gx6" (OuterVolumeSpecName: "kube-api-access-h6gx6") pod "90697da9-388e-4c2c-9959-12430b1c6848" (UID: "90697da9-388e-4c2c-9959-12430b1c6848"). InnerVolumeSpecName "kube-api-access-h6gx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:50:58.542488 master-0 kubenswrapper[34361]: I0224 05:50:58.540859 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" event={"ID":"90697da9-388e-4c2c-9959-12430b1c6848","Type":"ContainerDied","Data":"749cc87ba1eda340ab813f8fac34b1c3db9896634f9c4bc1aa4584f1c59a02a9"} Feb 24 05:50:58.542488 master-0 kubenswrapper[34361]: I0224 05:50:58.540977 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6974cff98c-2t99f" Feb 24 05:50:58.600334 master-0 kubenswrapper[34361]: I0224 05:50:58.591882 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90697da9-388e-4c2c-9959-12430b1c6848" (UID: "90697da9-388e-4c2c-9959-12430b1c6848"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:50:58.605229 master-0 kubenswrapper[34361]: I0224 05:50:58.605157 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6gx6\" (UniqueName: \"kubernetes.io/projected/90697da9-388e-4c2c-9959-12430b1c6848-kube-api-access-h6gx6\") on node \"master-0\" DevicePath \"\"" Feb 24 05:50:58.605229 master-0 kubenswrapper[34361]: I0224 05:50:58.605223 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:50:58.606064 master-0 kubenswrapper[34361]: I0224 05:50:58.605999 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-config" (OuterVolumeSpecName: "config") pod "90697da9-388e-4c2c-9959-12430b1c6848" (UID: "90697da9-388e-4c2c-9959-12430b1c6848"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:50:58.710896 master-0 kubenswrapper[34361]: I0224 05:50:58.710674 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90697da9-388e-4c2c-9959-12430b1c6848-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:50:58.876400 master-0 kubenswrapper[34361]: I0224 05:50:58.876302 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-2t99f"] Feb 24 05:50:58.888051 master-0 kubenswrapper[34361]: I0224 05:50:58.887981 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6974cff98c-2t99f"] Feb 24 05:50:59.078486 master-0 kubenswrapper[34361]: I0224 05:50:59.078426 34361 scope.go:117] "RemoveContainer" containerID="0eec7b21cdeb0a572f525a0faa373d7aac15567dcd141f7e012790d33f4b2e77" Feb 24 05:50:59.161450 master-0 kubenswrapper[34361]: I0224 05:50:59.161219 34361 scope.go:117] "RemoveContainer" containerID="6efaa18b29316da860689d53e8ebc140008429bb0cd6165ea5edcf4fd1019454" Feb 24 05:50:59.559585 master-0 kubenswrapper[34361]: I0224 05:50:59.559444 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d","Type":"ContainerStarted","Data":"2cc30bea41b59083f4f6eaad337925629e8516f5516c5b9b35ab986fede04eb0"} Feb 24 05:50:59.573383 master-0 kubenswrapper[34361]: I0224 05:50:59.569455 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3ab53d19-8c7c-4c4a-9372-c9e7fe2debd0","Type":"ContainerStarted","Data":"54273b0a51dfc541923eb986d9df4e35f5fcd8c4ffaaaefa1fd34a925613d006"} Feb 24 05:50:59.573383 master-0 kubenswrapper[34361]: I0224 05:50:59.570556 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 24 05:50:59.622601 master-0 kubenswrapper[34361]: I0224 05:50:59.622014 34361 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.569248389 podStartE2EDuration="26.621982788s" podCreationTimestamp="2026-02-24 05:50:33 +0000 UTC" firstStartedPulling="2026-02-24 05:50:50.064394191 +0000 UTC m=+809.767011237" lastFinishedPulling="2026-02-24 05:50:59.11712857 +0000 UTC m=+818.819745636" observedRunningTime="2026-02-24 05:50:59.614584449 +0000 UTC m=+819.317201505" watchObservedRunningTime="2026-02-24 05:50:59.621982788 +0000 UTC m=+819.324599834" Feb 24 05:51:00.626894 master-0 kubenswrapper[34361]: I0224 05:51:00.626829 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90697da9-388e-4c2c-9959-12430b1c6848" path="/var/lib/kubelet/pods/90697da9-388e-4c2c-9959-12430b1c6848/volumes" Feb 24 05:51:00.634462 master-0 kubenswrapper[34361]: I0224 05:51:00.628166 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v" event={"ID":"c11f5497-b3de-43b7-9312-b06485f2df8a","Type":"ContainerStarted","Data":"77a2d593d513d5ed50e417ba355770585f76e81c26c934b7fcc6a11b5c3a9e91"} Feb 24 05:51:00.634462 master-0 kubenswrapper[34361]: I0224 05:51:00.628256 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"15a24782-c245-408a-b5b7-4db9b8e57619","Type":"ContainerStarted","Data":"a6c1faddaa9d4ebcdae00d9c7e6416a629a144278a44148ebafaf82b31331c0c"} Feb 24 05:51:00.634462 master-0 kubenswrapper[34361]: I0224 05:51:00.628282 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mtg" event={"ID":"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d","Type":"ContainerStarted","Data":"e15da80f2775eaa476bec47f235f1154c0bc6223b413d6792a485f282dcd5e60"} Feb 24 05:51:00.634462 master-0 kubenswrapper[34361]: I0224 05:51:00.628330 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"376b0bd6-b6ed-42ca-bc34-b3823b24637e","Type":"ContainerStarted","Data":"9ebc03b74dc25cc184077400e927239c0543ca572f3314d9c91baa495a5539ec"} Feb 24 05:51:00.634462 master-0 kubenswrapper[34361]: I0224 05:51:00.628356 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f","Type":"ContainerStarted","Data":"3a9d44fe385ce29519b4c5b6896f09c9780507acb37234af479287a32b207e46"} Feb 24 05:51:00.761954 master-0 kubenswrapper[34361]: I0224 05:51:00.761833 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5kh8v" podStartSLOduration=13.191949152 podStartE2EDuration="21.761802114s" podCreationTimestamp="2026-02-24 05:50:39 +0000 UTC" firstStartedPulling="2026-02-24 05:50:50.595444546 +0000 UTC m=+810.298061592" lastFinishedPulling="2026-02-24 05:50:59.165297468 +0000 UTC m=+818.867914554" observedRunningTime="2026-02-24 05:51:00.75386504 +0000 UTC m=+820.456482096" watchObservedRunningTime="2026-02-24 05:51:00.761802114 +0000 UTC m=+820.464419160" Feb 24 05:51:01.641391 master-0 kubenswrapper[34361]: I0224 05:51:01.641287 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c4de741-6cb1-4ef0-80c9-173c72825057","Type":"ContainerStarted","Data":"0bed034cacdb2711007170fd519dfa80f4a0929347ae6ae697577eea31ed0b62"} Feb 24 05:51:01.644427 master-0 kubenswrapper[34361]: I0224 05:51:01.643888 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9ac06ab8-5197-4557-8124-583f49b6082b","Type":"ContainerStarted","Data":"47391d5812a91a97d137b4d1eda20bb7e42aa4f35e595c745924a38f15a39f1c"} Feb 24 05:51:01.645861 master-0 kubenswrapper[34361]: I0224 05:51:01.645767 34361 generic.go:334] "Generic (PLEG): container finished" podID="9a505c34-4d57-4f00-8ad1-ae7d585c2e0d" containerID="e15da80f2775eaa476bec47f235f1154c0bc6223b413d6792a485f282dcd5e60" 
exitCode=0 Feb 24 05:51:01.646505 master-0 kubenswrapper[34361]: I0224 05:51:01.646456 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mtg" event={"ID":"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d","Type":"ContainerDied","Data":"e15da80f2775eaa476bec47f235f1154c0bc6223b413d6792a485f282dcd5e60"} Feb 24 05:51:01.646987 master-0 kubenswrapper[34361]: I0224 05:51:01.646941 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5kh8v" Feb 24 05:51:02.662473 master-0 kubenswrapper[34361]: I0224 05:51:02.661629 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"85ba8f6d-e87e-4b58-b4c8-04fddb2fab3f","Type":"ContainerStarted","Data":"2f2a68a1bb6359d2824d58a194a4722c2214840247b7633303120097b335ce12"} Feb 24 05:51:02.666185 master-0 kubenswrapper[34361]: I0224 05:51:02.666118 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"15a24782-c245-408a-b5b7-4db9b8e57619","Type":"ContainerStarted","Data":"7b86c83ce37503cdfb700ffe44d78297776a5dcbfa6df7b4609b97d58ead32b2"} Feb 24 05:51:02.670365 master-0 kubenswrapper[34361]: I0224 05:51:02.670051 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mtg" event={"ID":"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d","Type":"ContainerStarted","Data":"eeb2d2e082ec6710153a6efd39decf4f0301d1fc20d010b15be53fe42befa439"} Feb 24 05:51:02.692511 master-0 kubenswrapper[34361]: I0224 05:51:02.692009 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.271032338 podStartE2EDuration="21.691990895s" podCreationTimestamp="2026-02-24 05:50:41 +0000 UTC" firstStartedPulling="2026-02-24 05:50:50.794503872 +0000 UTC m=+810.497120918" lastFinishedPulling="2026-02-24 05:51:02.215462429 +0000 UTC m=+821.918079475" observedRunningTime="2026-02-24 05:51:02.690272788 +0000 UTC 
m=+822.392889834" watchObservedRunningTime="2026-02-24 05:51:02.691990895 +0000 UTC m=+822.394607941" Feb 24 05:51:02.725975 master-0 kubenswrapper[34361]: I0224 05:51:02.725845 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.717376352 podStartE2EDuration="23.725822896s" podCreationTimestamp="2026-02-24 05:50:39 +0000 UTC" firstStartedPulling="2026-02-24 05:50:50.216015158 +0000 UTC m=+809.918632204" lastFinishedPulling="2026-02-24 05:51:02.224461702 +0000 UTC m=+821.927078748" observedRunningTime="2026-02-24 05:51:02.724799099 +0000 UTC m=+822.427416175" watchObservedRunningTime="2026-02-24 05:51:02.725822896 +0000 UTC m=+822.428439942" Feb 24 05:51:02.771108 master-0 kubenswrapper[34361]: I0224 05:51:02.771018 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5kqv6"] Feb 24 05:51:02.771539 master-0 kubenswrapper[34361]: E0224 05:51:02.771508 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90697da9-388e-4c2c-9959-12430b1c6848" containerName="init" Feb 24 05:51:02.771539 master-0 kubenswrapper[34361]: I0224 05:51:02.771529 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="90697da9-388e-4c2c-9959-12430b1c6848" containerName="init" Feb 24 05:51:02.771669 master-0 kubenswrapper[34361]: E0224 05:51:02.771577 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90697da9-388e-4c2c-9959-12430b1c6848" containerName="dnsmasq-dns" Feb 24 05:51:02.771669 master-0 kubenswrapper[34361]: I0224 05:51:02.771584 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="90697da9-388e-4c2c-9959-12430b1c6848" containerName="dnsmasq-dns" Feb 24 05:51:02.771669 master-0 kubenswrapper[34361]: E0224 05:51:02.771594 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f921d6ac-4034-4bd4-b129-9115b362066d" containerName="init" Feb 24 05:51:02.771669 master-0 kubenswrapper[34361]: I0224 
05:51:02.771600 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="f921d6ac-4034-4bd4-b129-9115b362066d" containerName="init" Feb 24 05:51:02.771669 master-0 kubenswrapper[34361]: E0224 05:51:02.771635 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02de9f91-2491-4db8-b028-1c5357ded011" containerName="init" Feb 24 05:51:02.771669 master-0 kubenswrapper[34361]: I0224 05:51:02.771640 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="02de9f91-2491-4db8-b028-1c5357ded011" containerName="init" Feb 24 05:51:02.771954 master-0 kubenswrapper[34361]: I0224 05:51:02.771857 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="02de9f91-2491-4db8-b028-1c5357ded011" containerName="init" Feb 24 05:51:02.771954 master-0 kubenswrapper[34361]: I0224 05:51:02.771881 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="f921d6ac-4034-4bd4-b129-9115b362066d" containerName="init" Feb 24 05:51:02.771954 master-0 kubenswrapper[34361]: I0224 05:51:02.771901 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="90697da9-388e-4c2c-9959-12430b1c6848" containerName="dnsmasq-dns" Feb 24 05:51:02.772651 master-0 kubenswrapper[34361]: I0224 05:51:02.772597 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:02.778613 master-0 kubenswrapper[34361]: I0224 05:51:02.778488 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 24 05:51:02.804154 master-0 kubenswrapper[34361]: I0224 05:51:02.804042 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5kqv6"] Feb 24 05:51:02.939637 master-0 kubenswrapper[34361]: I0224 05:51:02.939034 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2ba47386-9185-46cd-97d0-2d21a55fa3d6-ovs-rundir\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:02.939637 master-0 kubenswrapper[34361]: I0224 05:51:02.939183 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2ba47386-9185-46cd-97d0-2d21a55fa3d6-ovn-rundir\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:02.939637 master-0 kubenswrapper[34361]: I0224 05:51:02.939252 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba47386-9185-46cd-97d0-2d21a55fa3d6-combined-ca-bundle\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:02.939637 master-0 kubenswrapper[34361]: I0224 05:51:02.939407 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba47386-9185-46cd-97d0-2d21a55fa3d6-metrics-certs-tls-certs\") 
pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:02.939637 master-0 kubenswrapper[34361]: I0224 05:51:02.939433 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzlm\" (UniqueName: \"kubernetes.io/projected/2ba47386-9185-46cd-97d0-2d21a55fa3d6-kube-api-access-vxzlm\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:02.940064 master-0 kubenswrapper[34361]: I0224 05:51:02.939913 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba47386-9185-46cd-97d0-2d21a55fa3d6-config\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:02.982375 master-0 kubenswrapper[34361]: I0224 05:51:02.979105 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-nbjbv"] Feb 24 05:51:02.982375 master-0 kubenswrapper[34361]: I0224 05:51:02.981528 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:02.988056 master-0 kubenswrapper[34361]: I0224 05:51:02.985912 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 24 05:51:02.995397 master-0 kubenswrapper[34361]: I0224 05:51:02.995324 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-nbjbv"] Feb 24 05:51:03.049421 master-0 kubenswrapper[34361]: I0224 05:51:03.044516 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba47386-9185-46cd-97d0-2d21a55fa3d6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.049421 master-0 kubenswrapper[34361]: I0224 05:51:03.046377 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzlm\" (UniqueName: \"kubernetes.io/projected/2ba47386-9185-46cd-97d0-2d21a55fa3d6-kube-api-access-vxzlm\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.049421 master-0 kubenswrapper[34361]: I0224 05:51:03.046589 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba47386-9185-46cd-97d0-2d21a55fa3d6-config\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.049421 master-0 kubenswrapper[34361]: I0224 05:51:03.046755 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2ba47386-9185-46cd-97d0-2d21a55fa3d6-ovs-rundir\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " 
pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.049421 master-0 kubenswrapper[34361]: I0224 05:51:03.046782 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2ba47386-9185-46cd-97d0-2d21a55fa3d6-ovn-rundir\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.049421 master-0 kubenswrapper[34361]: I0224 05:51:03.046809 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba47386-9185-46cd-97d0-2d21a55fa3d6-combined-ca-bundle\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.054381 master-0 kubenswrapper[34361]: I0224 05:51:03.049511 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2ba47386-9185-46cd-97d0-2d21a55fa3d6-ovs-rundir\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.054381 master-0 kubenswrapper[34361]: I0224 05:51:03.049925 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba47386-9185-46cd-97d0-2d21a55fa3d6-config\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.054381 master-0 kubenswrapper[34361]: I0224 05:51:03.050156 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2ba47386-9185-46cd-97d0-2d21a55fa3d6-ovn-rundir\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " 
pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.054381 master-0 kubenswrapper[34361]: I0224 05:51:03.053912 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ba47386-9185-46cd-97d0-2d21a55fa3d6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.067094 master-0 kubenswrapper[34361]: I0224 05:51:03.067033 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzlm\" (UniqueName: \"kubernetes.io/projected/2ba47386-9185-46cd-97d0-2d21a55fa3d6-kube-api-access-vxzlm\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.078527 master-0 kubenswrapper[34361]: I0224 05:51:03.077385 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba47386-9185-46cd-97d0-2d21a55fa3d6-combined-ca-bundle\") pod \"ovn-controller-metrics-5kqv6\" (UID: \"2ba47386-9185-46cd-97d0-2d21a55fa3d6\") " pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.151422 master-0 kubenswrapper[34361]: I0224 05:51:03.149973 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.151422 master-0 kubenswrapper[34361]: I0224 05:51:03.150090 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-dns-svc\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" 
(UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.151422 master-0 kubenswrapper[34361]: I0224 05:51:03.150354 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-config\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.151422 master-0 kubenswrapper[34361]: I0224 05:51:03.150421 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k6cp\" (UniqueName: \"kubernetes.io/projected/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-kube-api-access-9k6cp\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.167877 master-0 kubenswrapper[34361]: I0224 05:51:03.167801 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5kqv6" Feb 24 05:51:03.255431 master-0 kubenswrapper[34361]: I0224 05:51:03.255361 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.255777 master-0 kubenswrapper[34361]: I0224 05:51:03.255449 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-dns-svc\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.255777 master-0 kubenswrapper[34361]: I0224 05:51:03.255551 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-config\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.255777 master-0 kubenswrapper[34361]: I0224 05:51:03.255576 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k6cp\" (UniqueName: \"kubernetes.io/projected/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-kube-api-access-9k6cp\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.256729 master-0 kubenswrapper[34361]: I0224 05:51:03.256317 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-ovsdbserver-nb\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: 
\"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.258810 master-0 kubenswrapper[34361]: I0224 05:51:03.258768 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-dns-svc\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.259166 master-0 kubenswrapper[34361]: I0224 05:51:03.258987 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-config\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.263268 master-0 kubenswrapper[34361]: I0224 05:51:03.262384 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-nbjbv"] Feb 24 05:51:03.264460 master-0 kubenswrapper[34361]: E0224 05:51:03.264406 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-9k6cp], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" podUID="6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9" Feb 24 05:51:03.289375 master-0 kubenswrapper[34361]: I0224 05:51:03.289264 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k6cp\" (UniqueName: \"kubernetes.io/projected/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-kube-api-access-9k6cp\") pod \"dnsmasq-dns-5c685c7df5-nbjbv\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.314600 master-0 kubenswrapper[34361]: I0224 05:51:03.314527 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 24 05:51:03.329010 master-0 
kubenswrapper[34361]: I0224 05:51:03.328924 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-5w2gf"] Feb 24 05:51:03.330953 master-0 kubenswrapper[34361]: I0224 05:51:03.330915 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.365555 master-0 kubenswrapper[34361]: I0224 05:51:03.338913 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 24 05:51:03.428278 master-0 kubenswrapper[34361]: I0224 05:51:03.428166 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-5w2gf"] Feb 24 05:51:03.468996 master-0 kubenswrapper[34361]: I0224 05:51:03.468931 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-nb\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.469503 master-0 kubenswrapper[34361]: I0224 05:51:03.469452 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-dns-svc\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.469724 master-0 kubenswrapper[34361]: I0224 05:51:03.469698 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrvq7\" (UniqueName: \"kubernetes.io/projected/cd8ee44a-5bb9-4456-915c-06bd6998afb8-kube-api-access-wrvq7\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.469905 master-0 
kubenswrapper[34361]: I0224 05:51:03.469858 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-sb\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.469955 master-0 kubenswrapper[34361]: I0224 05:51:03.469907 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-config\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.571899 master-0 kubenswrapper[34361]: I0224 05:51:03.571837 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-sb\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.572204 master-0 kubenswrapper[34361]: I0224 05:51:03.572077 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-config\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.572204 master-0 kubenswrapper[34361]: I0224 05:51:03.572127 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-nb\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 
05:51:03.572273 master-0 kubenswrapper[34361]: I0224 05:51:03.572256 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-dns-svc\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.572364 master-0 kubenswrapper[34361]: I0224 05:51:03.572337 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrvq7\" (UniqueName: \"kubernetes.io/projected/cd8ee44a-5bb9-4456-915c-06bd6998afb8-kube-api-access-wrvq7\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.574205 master-0 kubenswrapper[34361]: I0224 05:51:03.574151 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-sb\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.575298 master-0 kubenswrapper[34361]: I0224 05:51:03.575260 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-nb\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.577136 master-0 kubenswrapper[34361]: I0224 05:51:03.577085 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-config\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.579057 master-0 
kubenswrapper[34361]: I0224 05:51:03.579015 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-dns-svc\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.595089 master-0 kubenswrapper[34361]: I0224 05:51:03.593348 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrvq7\" (UniqueName: \"kubernetes.io/projected/cd8ee44a-5bb9-4456-915c-06bd6998afb8-kube-api-access-wrvq7\") pod \"dnsmasq-dns-65c6cc445f-5w2gf\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.682572 master-0 kubenswrapper[34361]: I0224 05:51:03.682493 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-86mtg" event={"ID":"9a505c34-4d57-4f00-8ad1-ae7d585c2e0d","Type":"ContainerStarted","Data":"e13767eb98c0ebb0991a1fff1b09d3957d47db3b003cae6cd8ed11c015bb589c"} Feb 24 05:51:03.683347 master-0 kubenswrapper[34361]: I0224 05:51:03.682640 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.685105 master-0 kubenswrapper[34361]: I0224 05:51:03.685072 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-86mtg" Feb 24 05:51:03.685105 master-0 kubenswrapper[34361]: I0224 05:51:03.685103 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-86mtg" Feb 24 05:51:03.701964 master-0 kubenswrapper[34361]: I0224 05:51:03.701728 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:03.710840 master-0 kubenswrapper[34361]: I0224 05:51:03.710730 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:03.788620 master-0 kubenswrapper[34361]: I0224 05:51:03.788330 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-86mtg" podStartSLOduration=16.622665223 podStartE2EDuration="24.788257216s" podCreationTimestamp="2026-02-24 05:50:39 +0000 UTC" firstStartedPulling="2026-02-24 05:50:50.995548112 +0000 UTC m=+810.698165158" lastFinishedPulling="2026-02-24 05:50:59.161140085 +0000 UTC m=+818.863757151" observedRunningTime="2026-02-24 05:51:03.77655365 +0000 UTC m=+823.479170706" watchObservedRunningTime="2026-02-24 05:51:03.788257216 +0000 UTC m=+823.490874282" Feb 24 05:51:03.853402 master-0 kubenswrapper[34361]: I0224 05:51:03.851830 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 24 05:51:03.886530 master-0 kubenswrapper[34361]: I0224 05:51:03.884674 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-config\") pod \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " Feb 24 05:51:03.886530 master-0 kubenswrapper[34361]: I0224 05:51:03.884905 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-ovsdbserver-nb\") pod \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " Feb 24 05:51:03.886530 master-0 kubenswrapper[34361]: I0224 05:51:03.885058 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-dns-svc\") pod \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " Feb 24 05:51:03.886530 master-0 
kubenswrapper[34361]: I0224 05:51:03.885129 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k6cp\" (UniqueName: \"kubernetes.io/projected/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-kube-api-access-9k6cp\") pod \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\" (UID: \"6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9\") " Feb 24 05:51:03.888119 master-0 kubenswrapper[34361]: I0224 05:51:03.888057 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9" (UID: "6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:03.888983 master-0 kubenswrapper[34361]: I0224 05:51:03.888639 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-config" (OuterVolumeSpecName: "config") pod "6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9" (UID: "6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:03.888983 master-0 kubenswrapper[34361]: I0224 05:51:03.888713 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9" (UID: "6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:03.894654 master-0 kubenswrapper[34361]: I0224 05:51:03.894604 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-kube-api-access-9k6cp" (OuterVolumeSpecName: "kube-api-access-9k6cp") pod "6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9" (UID: "6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9"). InnerVolumeSpecName "kube-api-access-9k6cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:03.896436 master-0 kubenswrapper[34361]: I0224 05:51:03.896374 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5kqv6"] Feb 24 05:51:03.912517 master-0 kubenswrapper[34361]: W0224 05:51:03.912456 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ba47386_9185_46cd_97d0_2d21a55fa3d6.slice/crio-59070fa75902821eeeff2c8ead17eecf20482f25613ffba60dfcc998547afebd WatchSource:0}: Error finding container 59070fa75902821eeeff2c8ead17eecf20482f25613ffba60dfcc998547afebd: Status 404 returned error can't find the container with id 59070fa75902821eeeff2c8ead17eecf20482f25613ffba60dfcc998547afebd Feb 24 05:51:03.918393 master-0 kubenswrapper[34361]: I0224 05:51:03.918344 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:03.918393 master-0 kubenswrapper[34361]: I0224 05:51:03.918390 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k6cp\" (UniqueName: \"kubernetes.io/projected/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-kube-api-access-9k6cp\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:03.918515 master-0 kubenswrapper[34361]: I0224 05:51:03.918427 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:03.918515 master-0 kubenswrapper[34361]: I0224 05:51:03.918440 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:03.929788 master-0 kubenswrapper[34361]: I0224 05:51:03.929729 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 24 05:51:04.219177 master-0 kubenswrapper[34361]: I0224 05:51:04.219041 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-5w2gf"] Feb 24 05:51:04.316396 master-0 kubenswrapper[34361]: I0224 05:51:04.316331 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 24 05:51:04.375418 master-0 kubenswrapper[34361]: I0224 05:51:04.375340 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 24 05:51:04.375651 master-0 kubenswrapper[34361]: I0224 05:51:04.375465 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 24 05:51:04.694775 master-0 kubenswrapper[34361]: I0224 05:51:04.694661 34361 generic.go:334] "Generic (PLEG): container finished" podID="1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d" containerID="2cc30bea41b59083f4f6eaad337925629e8516f5516c5b9b35ab986fede04eb0" exitCode=0 Feb 24 05:51:04.695419 master-0 kubenswrapper[34361]: I0224 05:51:04.694793 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d","Type":"ContainerDied","Data":"2cc30bea41b59083f4f6eaad337925629e8516f5516c5b9b35ab986fede04eb0"} Feb 24 05:51:04.701574 master-0 kubenswrapper[34361]: I0224 05:51:04.701526 34361 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5kqv6" event={"ID":"2ba47386-9185-46cd-97d0-2d21a55fa3d6","Type":"ContainerStarted","Data":"d771b73dd6a76ef93c15d1269bd2784469a81c8f65ae68122e0a410cfc1008d1"} Feb 24 05:51:04.701654 master-0 kubenswrapper[34361]: I0224 05:51:04.701585 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5kqv6" event={"ID":"2ba47386-9185-46cd-97d0-2d21a55fa3d6","Type":"ContainerStarted","Data":"59070fa75902821eeeff2c8ead17eecf20482f25613ffba60dfcc998547afebd"} Feb 24 05:51:04.703161 master-0 kubenswrapper[34361]: I0224 05:51:04.703114 34361 generic.go:334] "Generic (PLEG): container finished" podID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" containerID="1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795" exitCode=0 Feb 24 05:51:04.703527 master-0 kubenswrapper[34361]: I0224 05:51:04.703349 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" event={"ID":"cd8ee44a-5bb9-4456-915c-06bd6998afb8","Type":"ContainerDied","Data":"1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795"} Feb 24 05:51:04.703527 master-0 kubenswrapper[34361]: I0224 05:51:04.703497 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" event={"ID":"cd8ee44a-5bb9-4456-915c-06bd6998afb8","Type":"ContainerStarted","Data":"660cbae21e68ff2ee30b8a3ee79e7e3dd9e59b20543608a9cfcef778eb3a8672"} Feb 24 05:51:04.704139 master-0 kubenswrapper[34361]: I0224 05:51:04.704104 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c685c7df5-nbjbv" Feb 24 05:51:04.704691 master-0 kubenswrapper[34361]: I0224 05:51:04.704323 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 24 05:51:04.800452 master-0 kubenswrapper[34361]: I0224 05:51:04.800101 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5kqv6" podStartSLOduration=2.80007495 podStartE2EDuration="2.80007495s" podCreationTimestamp="2026-02-24 05:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:51:04.786505464 +0000 UTC m=+824.489122550" watchObservedRunningTime="2026-02-24 05:51:04.80007495 +0000 UTC m=+824.502691996" Feb 24 05:51:04.921551 master-0 kubenswrapper[34361]: I0224 05:51:04.921471 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-nbjbv"] Feb 24 05:51:04.939503 master-0 kubenswrapper[34361]: I0224 05:51:04.938489 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c685c7df5-nbjbv"] Feb 24 05:51:05.724762 master-0 kubenswrapper[34361]: I0224 05:51:05.724621 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"1c2bd63a-de4d-4b6a-ae00-b96c7b13d38d","Type":"ContainerStarted","Data":"e81e40a642201f54af6e0870009d5061cf61b60e20f47efe7d3bda87982749d2"} Feb 24 05:51:05.727206 master-0 kubenswrapper[34361]: I0224 05:51:05.727118 34361 generic.go:334] "Generic (PLEG): container finished" podID="376b0bd6-b6ed-42ca-bc34-b3823b24637e" containerID="9ebc03b74dc25cc184077400e927239c0543ca572f3314d9c91baa495a5539ec" exitCode=0 Feb 24 05:51:05.727415 master-0 kubenswrapper[34361]: I0224 05:51:05.727222 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"376b0bd6-b6ed-42ca-bc34-b3823b24637e","Type":"ContainerDied","Data":"9ebc03b74dc25cc184077400e927239c0543ca572f3314d9c91baa495a5539ec"} Feb 24 05:51:05.732592 master-0 kubenswrapper[34361]: I0224 05:51:05.732504 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" event={"ID":"cd8ee44a-5bb9-4456-915c-06bd6998afb8","Type":"ContainerStarted","Data":"7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b"} Feb 24 05:51:05.771413 master-0 kubenswrapper[34361]: I0224 05:51:05.768966 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=25.762713122 podStartE2EDuration="34.768934167s" podCreationTimestamp="2026-02-24 05:50:31 +0000 UTC" firstStartedPulling="2026-02-24 05:50:50.110889104 +0000 UTC m=+809.813506150" lastFinishedPulling="2026-02-24 05:50:59.117110119 +0000 UTC m=+818.819727195" observedRunningTime="2026-02-24 05:51:05.752864474 +0000 UTC m=+825.455481520" watchObservedRunningTime="2026-02-24 05:51:05.768934167 +0000 UTC m=+825.471551213" Feb 24 05:51:05.797341 master-0 kubenswrapper[34361]: I0224 05:51:05.796750 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" podStartSLOduration=2.796717856 podStartE2EDuration="2.796717856s" podCreationTimestamp="2026-02-24 05:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:51:05.783383467 +0000 UTC m=+825.486000523" watchObservedRunningTime="2026-02-24 05:51:05.796717856 +0000 UTC m=+825.499334912" Feb 24 05:51:05.806018 master-0 kubenswrapper[34361]: I0224 05:51:05.805962 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 24 05:51:05.829344 master-0 kubenswrapper[34361]: I0224 05:51:05.826957 34361 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 24 05:51:06.258486 master-0 kubenswrapper[34361]: I0224 05:51:06.258396 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 24 05:51:06.260899 master-0 kubenswrapper[34361]: I0224 05:51:06.260536 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 05:51:06.267071 master-0 kubenswrapper[34361]: I0224 05:51:06.264744 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 24 05:51:06.267936 master-0 kubenswrapper[34361]: I0224 05:51:06.267894 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 24 05:51:06.268355 master-0 kubenswrapper[34361]: I0224 05:51:06.268330 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 24 05:51:06.269135 master-0 kubenswrapper[34361]: I0224 05:51:06.269067 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 24 05:51:06.300714 master-0 kubenswrapper[34361]: I0224 05:51:06.300485 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.300714 master-0 kubenswrapper[34361]: I0224 05:51:06.300619 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c1931c4-36b0-4c64-8e18-eb5abea9860f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.300714 master-0 kubenswrapper[34361]: I0224 05:51:06.300680 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1931c4-36b0-4c64-8e18-eb5abea9860f-config\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.300880 master-0 kubenswrapper[34361]: I0224 05:51:06.300848 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27gn5\" (UniqueName: \"kubernetes.io/projected/9c1931c4-36b0-4c64-8e18-eb5abea9860f-kube-api-access-27gn5\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.304341 master-0 kubenswrapper[34361]: I0224 05:51:06.300923 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.304341 master-0 kubenswrapper[34361]: I0224 05:51:06.301097 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.304341 master-0 kubenswrapper[34361]: I0224 05:51:06.301237 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c1931c4-36b0-4c64-8e18-eb5abea9860f-scripts\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.405174 master-0 kubenswrapper[34361]: I0224 05:51:06.403836 34361 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.405174 master-0 kubenswrapper[34361]: I0224 05:51:06.403951 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c1931c4-36b0-4c64-8e18-eb5abea9860f-scripts\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.405174 master-0 kubenswrapper[34361]: I0224 05:51:06.404031 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.405174 master-0 kubenswrapper[34361]: I0224 05:51:06.404077 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c1931c4-36b0-4c64-8e18-eb5abea9860f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.405174 master-0 kubenswrapper[34361]: I0224 05:51:06.404109 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1931c4-36b0-4c64-8e18-eb5abea9860f-config\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.405174 master-0 kubenswrapper[34361]: I0224 05:51:06.404155 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27gn5\" (UniqueName: \"kubernetes.io/projected/9c1931c4-36b0-4c64-8e18-eb5abea9860f-kube-api-access-27gn5\") pod \"ovn-northd-0\" 
(UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.405174 master-0 kubenswrapper[34361]: I0224 05:51:06.404186 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.407991 master-0 kubenswrapper[34361]: I0224 05:51:06.407775 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c1931c4-36b0-4c64-8e18-eb5abea9860f-config\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.407991 master-0 kubenswrapper[34361]: I0224 05:51:06.407837 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9c1931c4-36b0-4c64-8e18-eb5abea9860f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.409210 master-0 kubenswrapper[34361]: I0224 05:51:06.409159 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c1931c4-36b0-4c64-8e18-eb5abea9860f-scripts\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.443004 master-0 kubenswrapper[34361]: I0224 05:51:06.441356 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.443004 master-0 kubenswrapper[34361]: I0224 05:51:06.441671 34361 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.446859 master-0 kubenswrapper[34361]: I0224 05:51:06.446816 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27gn5\" (UniqueName: \"kubernetes.io/projected/9c1931c4-36b0-4c64-8e18-eb5abea9860f-kube-api-access-27gn5\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.458273 master-0 kubenswrapper[34361]: I0224 05:51:06.456412 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c1931c4-36b0-4c64-8e18-eb5abea9860f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9c1931c4-36b0-4c64-8e18-eb5abea9860f\") " pod="openstack/ovn-northd-0" Feb 24 05:51:06.528447 master-0 kubenswrapper[34361]: I0224 05:51:06.526783 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-5w2gf"] Feb 24 05:51:06.569625 master-0 kubenswrapper[34361]: I0224 05:51:06.569523 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-4n57j"] Feb 24 05:51:06.583504 master-0 kubenswrapper[34361]: I0224 05:51:06.580522 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.595279 master-0 kubenswrapper[34361]: I0224 05:51:06.593995 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-4n57j"] Feb 24 05:51:06.640339 master-0 kubenswrapper[34361]: I0224 05:51:06.638831 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 24 05:51:06.640591 master-0 kubenswrapper[34361]: I0224 05:51:06.640547 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-config\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.641303 master-0 kubenswrapper[34361]: I0224 05:51:06.640951 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-dns-svc\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.641303 master-0 kubenswrapper[34361]: I0224 05:51:06.641043 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.641303 master-0 kubenswrapper[34361]: I0224 05:51:06.641213 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk49q\" (UniqueName: \"kubernetes.io/projected/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-kube-api-access-rk49q\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.641303 master-0 kubenswrapper[34361]: I0224 05:51:06.641251 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.642989 master-0 kubenswrapper[34361]: I0224 05:51:06.641763 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9" path="/var/lib/kubelet/pods/6f2b326d-6ef0-44ce-8fdd-1c74fa4c6bd9/volumes" Feb 24 05:51:06.745353 master-0 kubenswrapper[34361]: I0224 05:51:06.743216 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-dns-svc\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.745353 master-0 kubenswrapper[34361]: I0224 05:51:06.743347 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.745353 master-0 kubenswrapper[34361]: I0224 05:51:06.743427 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk49q\" (UniqueName: \"kubernetes.io/projected/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-kube-api-access-rk49q\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.745353 master-0 kubenswrapper[34361]: I0224 05:51:06.743464 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: 
\"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.745353 master-0 kubenswrapper[34361]: I0224 05:51:06.743528 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-config\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.745353 master-0 kubenswrapper[34361]: I0224 05:51:06.744744 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-config\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.747387 master-0 kubenswrapper[34361]: I0224 05:51:06.747345 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-dns-svc\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.748193 master-0 kubenswrapper[34361]: I0224 05:51:06.748151 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.749426 master-0 kubenswrapper[34361]: I0224 05:51:06.749390 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " 
pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.755171 master-0 kubenswrapper[34361]: I0224 05:51:06.755113 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"376b0bd6-b6ed-42ca-bc34-b3823b24637e","Type":"ContainerStarted","Data":"b223d5189efe5f576b4aeda73f2eac4991b7818e7122b2fb60be085109bc09b1"} Feb 24 05:51:06.756478 master-0 kubenswrapper[34361]: I0224 05:51:06.756426 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:06.765547 master-0 kubenswrapper[34361]: I0224 05:51:06.765502 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk49q\" (UniqueName: \"kubernetes.io/projected/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-kube-api-access-rk49q\") pod \"dnsmasq-dns-5c55964f59-4n57j\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:06.791397 master-0 kubenswrapper[34361]: I0224 05:51:06.790179 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=25.664558754 podStartE2EDuration="34.790149326s" podCreationTimestamp="2026-02-24 05:50:32 +0000 UTC" firstStartedPulling="2026-02-24 05:50:50.043252321 +0000 UTC m=+809.745869367" lastFinishedPulling="2026-02-24 05:50:59.168842853 +0000 UTC m=+818.871459939" observedRunningTime="2026-02-24 05:51:06.777027562 +0000 UTC m=+826.479644608" watchObservedRunningTime="2026-02-24 05:51:06.790149326 +0000 UTC m=+826.492766372" Feb 24 05:51:06.939405 master-0 kubenswrapper[34361]: I0224 05:51:06.936940 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:07.175138 master-0 kubenswrapper[34361]: I0224 05:51:07.175060 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 24 05:51:07.186163 master-0 kubenswrapper[34361]: W0224 05:51:07.186103 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c1931c4_36b0_4c64_8e18_eb5abea9860f.slice/crio-c30d54342835f6b2b2f17aa0dec4efe1e4a00405d2b6cca495d6c6a5baec49af WatchSource:0}: Error finding container c30d54342835f6b2b2f17aa0dec4efe1e4a00405d2b6cca495d6c6a5baec49af: Status 404 returned error can't find the container with id c30d54342835f6b2b2f17aa0dec4efe1e4a00405d2b6cca495d6c6a5baec49af Feb 24 05:51:07.439421 master-0 kubenswrapper[34361]: I0224 05:51:07.439343 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-4n57j"] Feb 24 05:51:07.777150 master-0 kubenswrapper[34361]: I0224 05:51:07.777062 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c1931c4-36b0-4c64-8e18-eb5abea9860f","Type":"ContainerStarted","Data":"c30d54342835f6b2b2f17aa0dec4efe1e4a00405d2b6cca495d6c6a5baec49af"} Feb 24 05:51:07.781841 master-0 kubenswrapper[34361]: I0224 05:51:07.781766 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" event={"ID":"e1fdfa97-4eba-4aa9-88e0-3b426829d15e","Type":"ContainerStarted","Data":"ac74574fe745ed4b9d807449690911637470260ccc958e228012866bbadc1ca8"} Feb 24 05:51:07.781841 master-0 kubenswrapper[34361]: I0224 05:51:07.781841 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" event={"ID":"e1fdfa97-4eba-4aa9-88e0-3b426829d15e","Type":"ContainerStarted","Data":"3beeceb6c9d375bd9b06673cea00e6f86f74b971d8d01e2e7f7f53b251ab2a3e"} Feb 24 05:51:07.782050 master-0 kubenswrapper[34361]: I0224 
05:51:07.781978 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" podUID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" containerName="dnsmasq-dns" containerID="cri-o://7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b" gracePeriod=10 Feb 24 05:51:08.420300 master-0 kubenswrapper[34361]: I0224 05:51:08.420222 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:08.509197 master-0 kubenswrapper[34361]: I0224 05:51:08.509121 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-dns-svc\") pod \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " Feb 24 05:51:08.509641 master-0 kubenswrapper[34361]: I0224 05:51:08.509264 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-sb\") pod \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " Feb 24 05:51:08.509641 master-0 kubenswrapper[34361]: I0224 05:51:08.509288 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrvq7\" (UniqueName: \"kubernetes.io/projected/cd8ee44a-5bb9-4456-915c-06bd6998afb8-kube-api-access-wrvq7\") pod \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " Feb 24 05:51:08.509641 master-0 kubenswrapper[34361]: I0224 05:51:08.509327 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-nb\") pod \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " Feb 24 
05:51:08.509641 master-0 kubenswrapper[34361]: I0224 05:51:08.509375 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-config\") pod \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\" (UID: \"cd8ee44a-5bb9-4456-915c-06bd6998afb8\") " Feb 24 05:51:08.515652 master-0 kubenswrapper[34361]: I0224 05:51:08.515580 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8ee44a-5bb9-4456-915c-06bd6998afb8-kube-api-access-wrvq7" (OuterVolumeSpecName: "kube-api-access-wrvq7") pod "cd8ee44a-5bb9-4456-915c-06bd6998afb8" (UID: "cd8ee44a-5bb9-4456-915c-06bd6998afb8"). InnerVolumeSpecName "kube-api-access-wrvq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:08.569572 master-0 kubenswrapper[34361]: I0224 05:51:08.569455 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-config" (OuterVolumeSpecName: "config") pod "cd8ee44a-5bb9-4456-915c-06bd6998afb8" (UID: "cd8ee44a-5bb9-4456-915c-06bd6998afb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:08.573346 master-0 kubenswrapper[34361]: I0224 05:51:08.573237 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd8ee44a-5bb9-4456-915c-06bd6998afb8" (UID: "cd8ee44a-5bb9-4456-915c-06bd6998afb8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:08.578771 master-0 kubenswrapper[34361]: I0224 05:51:08.578653 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd8ee44a-5bb9-4456-915c-06bd6998afb8" (UID: "cd8ee44a-5bb9-4456-915c-06bd6998afb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:08.579518 master-0 kubenswrapper[34361]: I0224 05:51:08.579436 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd8ee44a-5bb9-4456-915c-06bd6998afb8" (UID: "cd8ee44a-5bb9-4456-915c-06bd6998afb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:08.612230 master-0 kubenswrapper[34361]: I0224 05:51:08.612155 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:08.612230 master-0 kubenswrapper[34361]: I0224 05:51:08.612205 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:08.612230 master-0 kubenswrapper[34361]: I0224 05:51:08.612219 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrvq7\" (UniqueName: \"kubernetes.io/projected/cd8ee44a-5bb9-4456-915c-06bd6998afb8-kube-api-access-wrvq7\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:08.612230 master-0 kubenswrapper[34361]: I0224 05:51:08.612230 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:08.612230 master-0 kubenswrapper[34361]: I0224 05:51:08.612239 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8ee44a-5bb9-4456-915c-06bd6998afb8-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:08.729013 master-0 kubenswrapper[34361]: I0224 05:51:08.728790 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 24 05:51:08.729504 master-0 kubenswrapper[34361]: E0224 05:51:08.729406 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" containerName="init" Feb 24 05:51:08.729504 master-0 kubenswrapper[34361]: I0224 05:51:08.729429 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" containerName="init" Feb 24 05:51:08.729504 master-0 kubenswrapper[34361]: E0224 05:51:08.729463 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" containerName="dnsmasq-dns" Feb 24 05:51:08.729504 master-0 kubenswrapper[34361]: I0224 05:51:08.729470 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" containerName="dnsmasq-dns" Feb 24 05:51:08.729787 master-0 kubenswrapper[34361]: I0224 05:51:08.729751 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" containerName="dnsmasq-dns" Feb 24 05:51:08.737499 master-0 kubenswrapper[34361]: I0224 05:51:08.737455 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 24 05:51:08.744872 master-0 kubenswrapper[34361]: I0224 05:51:08.740360 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 24 05:51:08.744872 master-0 kubenswrapper[34361]: I0224 05:51:08.740452 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 24 05:51:08.744872 master-0 kubenswrapper[34361]: I0224 05:51:08.740607 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 24 05:51:08.799771 master-0 kubenswrapper[34361]: I0224 05:51:08.799692 34361 generic.go:334] "Generic (PLEG): container finished" podID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" containerID="7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b" exitCode=0 Feb 24 05:51:08.800290 master-0 kubenswrapper[34361]: I0224 05:51:08.799788 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" event={"ID":"cd8ee44a-5bb9-4456-915c-06bd6998afb8","Type":"ContainerDied","Data":"7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b"} Feb 24 05:51:08.800290 master-0 kubenswrapper[34361]: I0224 05:51:08.799918 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" event={"ID":"cd8ee44a-5bb9-4456-915c-06bd6998afb8","Type":"ContainerDied","Data":"660cbae21e68ff2ee30b8a3ee79e7e3dd9e59b20543608a9cfcef778eb3a8672"} Feb 24 05:51:08.800290 master-0 kubenswrapper[34361]: I0224 05:51:08.799965 34361 scope.go:117] "RemoveContainer" containerID="7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b" Feb 24 05:51:08.800425 master-0 kubenswrapper[34361]: I0224 05:51:08.800151 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c6cc445f-5w2gf" Feb 24 05:51:08.805038 master-0 kubenswrapper[34361]: I0224 05:51:08.804822 34361 generic.go:334] "Generic (PLEG): container finished" podID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" containerID="ac74574fe745ed4b9d807449690911637470260ccc958e228012866bbadc1ca8" exitCode=0 Feb 24 05:51:08.805038 master-0 kubenswrapper[34361]: I0224 05:51:08.804924 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" event={"ID":"e1fdfa97-4eba-4aa9-88e0-3b426829d15e","Type":"ContainerDied","Data":"ac74574fe745ed4b9d807449690911637470260ccc958e228012866bbadc1ca8"} Feb 24 05:51:08.884809 master-0 kubenswrapper[34361]: I0224 05:51:08.884712 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 24 05:51:08.940432 master-0 kubenswrapper[34361]: I0224 05:51:08.939775 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-cache\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:08.940432 master-0 kubenswrapper[34361]: I0224 05:51:08.939863 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:08.940432 master-0 kubenswrapper[34361]: I0224 05:51:08.939901 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 
24 05:51:08.940432 master-0 kubenswrapper[34361]: I0224 05:51:08.939937 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mxd\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-kube-api-access-s5mxd\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:08.940432 master-0 kubenswrapper[34361]: I0224 05:51:08.940050 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c269bc7e-f3d0-4828-8ba4-a192dd94a207\" (UniqueName: \"kubernetes.io/csi/topolvm.io^831b2b2b-d48b-4d12-a700-2329c110fce1\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:08.940432 master-0 kubenswrapper[34361]: I0224 05:51:08.940152 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-lock\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:08.999760 master-0 kubenswrapper[34361]: I0224 05:51:08.999699 34361 scope.go:117] "RemoveContainer" containerID="1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795" Feb 24 05:51:09.044434 master-0 kubenswrapper[34361]: I0224 05:51:09.044291 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.044434 master-0 kubenswrapper[34361]: I0224 05:51:09.044388 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5mxd\" (UniqueName: 
\"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-kube-api-access-s5mxd\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.044655 master-0 kubenswrapper[34361]: E0224 05:51:09.044518 34361 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 05:51:09.044655 master-0 kubenswrapper[34361]: E0224 05:51:09.044552 34361 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 05:51:09.044655 master-0 kubenswrapper[34361]: E0224 05:51:09.044623 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift podName:9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c nodeName:}" failed. No retries permitted until 2026-02-24 05:51:09.544595917 +0000 UTC m=+829.247212973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift") pod "swift-storage-0" (UID: "9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c") : configmap "swift-ring-files" not found Feb 24 05:51:09.044853 master-0 kubenswrapper[34361]: I0224 05:51:09.044807 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c269bc7e-f3d0-4828-8ba4-a192dd94a207\" (UniqueName: \"kubernetes.io/csi/topolvm.io^831b2b2b-d48b-4d12-a700-2329c110fce1\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.046930 master-0 kubenswrapper[34361]: I0224 05:51:09.044959 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-lock\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " 
pod="openstack/swift-storage-0" Feb 24 05:51:09.046930 master-0 kubenswrapper[34361]: I0224 05:51:09.045021 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-cache\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.046930 master-0 kubenswrapper[34361]: I0224 05:51:09.045061 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.046930 master-0 kubenswrapper[34361]: I0224 05:51:09.046676 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-lock\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.046930 master-0 kubenswrapper[34361]: I0224 05:51:09.046786 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-cache\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.048940 master-0 kubenswrapper[34361]: I0224 05:51:09.048777 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-5w2gf"] Feb 24 05:51:09.050465 master-0 kubenswrapper[34361]: I0224 05:51:09.050432 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 05:51:09.050465 master-0 kubenswrapper[34361]: I0224 05:51:09.050471 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c269bc7e-f3d0-4828-8ba4-a192dd94a207\" (UniqueName: \"kubernetes.io/csi/topolvm.io^831b2b2b-d48b-4d12-a700-2329c110fce1\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b81c980df41704c2525003f64663fa0fefb51f6488d3f42aaff9d5b8bea7180a/globalmount\"" pod="openstack/swift-storage-0" Feb 24 05:51:09.052840 master-0 kubenswrapper[34361]: I0224 05:51:09.052799 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.087896 master-0 kubenswrapper[34361]: I0224 05:51:09.087842 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 24 05:51:09.088005 master-0 kubenswrapper[34361]: I0224 05:51:09.087911 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 24 05:51:09.088855 master-0 kubenswrapper[34361]: I0224 05:51:09.088821 34361 scope.go:117] "RemoveContainer" containerID="7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b" Feb 24 05:51:09.094944 master-0 kubenswrapper[34361]: E0224 05:51:09.094911 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b\": container with ID starting with 7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b not found: ID does not exist" containerID="7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b" Feb 24 05:51:09.095106 master-0 
kubenswrapper[34361]: I0224 05:51:09.095076 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b"} err="failed to get container status \"7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b\": rpc error: code = NotFound desc = could not find container \"7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b\": container with ID starting with 7fe25c536c9960f191c585b1c67e1bef92fcf8605257cc1a466e4c1cbf14752b not found: ID does not exist" Feb 24 05:51:09.095181 master-0 kubenswrapper[34361]: I0224 05:51:09.095169 34361 scope.go:117] "RemoveContainer" containerID="1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795" Feb 24 05:51:09.095571 master-0 kubenswrapper[34361]: E0224 05:51:09.095522 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795\": container with ID starting with 1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795 not found: ID does not exist" containerID="1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795" Feb 24 05:51:09.095656 master-0 kubenswrapper[34361]: I0224 05:51:09.095573 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795"} err="failed to get container status \"1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795\": rpc error: code = NotFound desc = could not find container \"1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795\": container with ID starting with 1597b1f991bb41c2e07bfa19fb4612e5e4f490bb93455f7c3809b58a5b4a1795 not found: ID does not exist" Feb 24 05:51:09.266281 master-0 kubenswrapper[34361]: I0224 05:51:09.266202 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-s5mxd\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-kube-api-access-s5mxd\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.271224 master-0 kubenswrapper[34361]: I0224 05:51:09.271144 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65c6cc445f-5w2gf"] Feb 24 05:51:09.576738 master-0 kubenswrapper[34361]: I0224 05:51:09.576645 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:09.577145 master-0 kubenswrapper[34361]: E0224 05:51:09.577050 34361 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 05:51:09.577145 master-0 kubenswrapper[34361]: E0224 05:51:09.577081 34361 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 05:51:09.577259 master-0 kubenswrapper[34361]: E0224 05:51:09.577167 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift podName:9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c nodeName:}" failed. No retries permitted until 2026-02-24 05:51:10.577146133 +0000 UTC m=+830.279763179 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift") pod "swift-storage-0" (UID: "9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c") : configmap "swift-ring-files" not found Feb 24 05:51:09.820341 master-0 kubenswrapper[34361]: I0224 05:51:09.820256 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" event={"ID":"e1fdfa97-4eba-4aa9-88e0-3b426829d15e","Type":"ContainerStarted","Data":"1098132eae83b70ef21828512180bd59c746271cf3d8ad31f2918bf4bba914d5"} Feb 24 05:51:09.821085 master-0 kubenswrapper[34361]: I0224 05:51:09.820501 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:51:09.952839 master-0 kubenswrapper[34361]: I0224 05:51:09.952734 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-th4vs"] Feb 24 05:51:09.954404 master-0 kubenswrapper[34361]: I0224 05:51:09.954370 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:09.956841 master-0 kubenswrapper[34361]: I0224 05:51:09.956775 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 24 05:51:09.956943 master-0 kubenswrapper[34361]: I0224 05:51:09.956800 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 24 05:51:09.957468 master-0 kubenswrapper[34361]: I0224 05:51:09.957431 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 24 05:51:10.091423 master-0 kubenswrapper[34361]: I0224 05:51:10.091288 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-swiftconf\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.091730 master-0 kubenswrapper[34361]: I0224 05:51:10.091616 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-ring-data-devices\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.091730 master-0 kubenswrapper[34361]: I0224 05:51:10.091717 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-dispersionconf\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.092399 master-0 kubenswrapper[34361]: I0224 05:51:10.091920 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-combined-ca-bundle\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.092399 master-0 kubenswrapper[34361]: I0224 05:51:10.092171 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7mm\" (UniqueName: \"kubernetes.io/projected/34fce7dc-c92e-471b-9efa-f4960fb52c37-kube-api-access-fd7mm\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.092399 master-0 kubenswrapper[34361]: I0224 05:51:10.092391 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-scripts\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.092498 master-0 kubenswrapper[34361]: I0224 05:51:10.092460 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34fce7dc-c92e-471b-9efa-f4960fb52c37-etc-swift\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.125010 master-0 kubenswrapper[34361]: I0224 05:51:10.122461 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-th4vs"] Feb 24 05:51:10.130216 master-0 kubenswrapper[34361]: I0224 05:51:10.130120 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" podStartSLOduration=4.130092478 podStartE2EDuration="4.130092478s" 
podCreationTimestamp="2026-02-24 05:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:51:10.108171607 +0000 UTC m=+829.810788653" watchObservedRunningTime="2026-02-24 05:51:10.130092478 +0000 UTC m=+829.832709524" Feb 24 05:51:10.195298 master-0 kubenswrapper[34361]: I0224 05:51:10.195218 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-scripts\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.195298 master-0 kubenswrapper[34361]: I0224 05:51:10.195295 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34fce7dc-c92e-471b-9efa-f4960fb52c37-etc-swift\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.195687 master-0 kubenswrapper[34361]: I0224 05:51:10.195372 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-swiftconf\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.195835 master-0 kubenswrapper[34361]: I0224 05:51:10.195767 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-ring-data-devices\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.196119 master-0 kubenswrapper[34361]: I0224 05:51:10.196081 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-dispersionconf\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.196249 master-0 kubenswrapper[34361]: I0224 05:51:10.196183 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-combined-ca-bundle\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.196336 master-0 kubenswrapper[34361]: I0224 05:51:10.196255 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd7mm\" (UniqueName: \"kubernetes.io/projected/34fce7dc-c92e-471b-9efa-f4960fb52c37-kube-api-access-fd7mm\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.196450 master-0 kubenswrapper[34361]: I0224 05:51:10.196398 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-scripts\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.196450 master-0 kubenswrapper[34361]: I0224 05:51:10.196179 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34fce7dc-c92e-471b-9efa-f4960fb52c37-etc-swift\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.197109 master-0 kubenswrapper[34361]: I0224 05:51:10.197061 34361 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-ring-data-devices\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.199757 master-0 kubenswrapper[34361]: I0224 05:51:10.199713 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-swiftconf\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.200984 master-0 kubenswrapper[34361]: I0224 05:51:10.200942 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-dispersionconf\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.204074 master-0 kubenswrapper[34361]: I0224 05:51:10.203971 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-combined-ca-bundle\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.255905 master-0 kubenswrapper[34361]: I0224 05:51:10.254904 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 24 05:51:10.255905 master-0 kubenswrapper[34361]: I0224 05:51:10.255075 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 24 05:51:10.485530 master-0 kubenswrapper[34361]: I0224 05:51:10.485445 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd7mm\" (UniqueName: 
\"kubernetes.io/projected/34fce7dc-c92e-471b-9efa-f4960fb52c37-kube-api-access-fd7mm\") pod \"swift-ring-rebalance-th4vs\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") " pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.555511 master-0 kubenswrapper[34361]: I0224 05:51:10.555440 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c269bc7e-f3d0-4828-8ba4-a192dd94a207\" (UniqueName: \"kubernetes.io/csi/topolvm.io^831b2b2b-d48b-4d12-a700-2329c110fce1\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:10.575412 master-0 kubenswrapper[34361]: I0224 05:51:10.575251 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-th4vs" Feb 24 05:51:10.616534 master-0 kubenswrapper[34361]: I0224 05:51:10.616455 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8ee44a-5bb9-4456-915c-06bd6998afb8" path="/var/lib/kubelet/pods/cd8ee44a-5bb9-4456-915c-06bd6998afb8/volumes" Feb 24 05:51:10.625937 master-0 kubenswrapper[34361]: E0224 05:51:10.625861 34361 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 24 05:51:10.625937 master-0 kubenswrapper[34361]: E0224 05:51:10.625922 34361 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 24 05:51:10.627144 master-0 kubenswrapper[34361]: E0224 05:51:10.626007 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift podName:9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c nodeName:}" failed. No retries permitted until 2026-02-24 05:51:12.625978126 +0000 UTC m=+832.328595212 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift") pod "swift-storage-0" (UID: "9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c") : configmap "swift-ring-files" not found Feb 24 05:51:10.627144 master-0 kubenswrapper[34361]: I0224 05:51:10.626499 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:10.839635 master-0 kubenswrapper[34361]: I0224 05:51:10.839569 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c1931c4-36b0-4c64-8e18-eb5abea9860f","Type":"ContainerStarted","Data":"55dcdaf17beb4e4589e96504c4158055f36cbbf74ffb819a1ce3b71b3856f792"} Feb 24 05:51:11.138542 master-0 kubenswrapper[34361]: W0224 05:51:11.138071 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34fce7dc_c92e_471b_9efa_f4960fb52c37.slice/crio-cb25f2485895e9388c52ebca56d10fa9ab4dff645993dcefcda33d8cee1e469d WatchSource:0}: Error finding container cb25f2485895e9388c52ebca56d10fa9ab4dff645993dcefcda33d8cee1e469d: Status 404 returned error can't find the container with id cb25f2485895e9388c52ebca56d10fa9ab4dff645993dcefcda33d8cee1e469d Feb 24 05:51:11.158829 master-0 kubenswrapper[34361]: I0224 05:51:11.158745 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-th4vs"] Feb 24 05:51:11.878975 master-0 kubenswrapper[34361]: I0224 05:51:11.878901 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9c1931c4-36b0-4c64-8e18-eb5abea9860f","Type":"ContainerStarted","Data":"d8ddd46b88d8fbc7fddd1ce202b8439901817f48f52b96bc3dc03b461502798a"} Feb 24 05:51:11.879865 
master-0 kubenswrapper[34361]: I0224 05:51:11.879089 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 24 05:51:11.884080 master-0 kubenswrapper[34361]: I0224 05:51:11.884017 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-th4vs" event={"ID":"34fce7dc-c92e-471b-9efa-f4960fb52c37","Type":"ContainerStarted","Data":"cb25f2485895e9388c52ebca56d10fa9ab4dff645993dcefcda33d8cee1e469d"} Feb 24 05:51:11.909084 master-0 kubenswrapper[34361]: I0224 05:51:11.908996 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.003951821 podStartE2EDuration="5.90897572s" podCreationTimestamp="2026-02-24 05:51:06 +0000 UTC" firstStartedPulling="2026-02-24 05:51:07.191505625 +0000 UTC m=+826.894122671" lastFinishedPulling="2026-02-24 05:51:10.096529504 +0000 UTC m=+829.799146570" observedRunningTime="2026-02-24 05:51:11.906224296 +0000 UTC m=+831.608841352" watchObservedRunningTime="2026-02-24 05:51:11.90897572 +0000 UTC m=+831.611592766" Feb 24 05:51:12.566695 master-0 kubenswrapper[34361]: I0224 05:51:12.566600 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 24 05:51:12.677547 master-0 kubenswrapper[34361]: I0224 05:51:12.677477 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 24 05:51:12.699680 master-0 kubenswrapper[34361]: I0224 05:51:12.699608 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0" Feb 24 05:51:12.700089 master-0 kubenswrapper[34361]: E0224 05:51:12.699770 34361 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap 
"swift-ring-files" not found
Feb 24 05:51:12.700089 master-0 kubenswrapper[34361]: E0224 05:51:12.699811 34361 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 24 05:51:12.700089 master-0 kubenswrapper[34361]: E0224 05:51:12.699938 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift podName:9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c nodeName:}" failed. No retries permitted until 2026-02-24 05:51:16.699908231 +0000 UTC m=+836.402525277 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift") pod "swift-storage-0" (UID: "9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c") : configmap "swift-ring-files" not found
Feb 24 05:51:14.369426 master-0 kubenswrapper[34361]: I0224 05:51:14.369301 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 24 05:51:14.496525 master-0 kubenswrapper[34361]: I0224 05:51:14.496440 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 24 05:51:14.782294 master-0 kubenswrapper[34361]: I0224 05:51:14.782227 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-738d-account-create-update-p9hmm"]
Feb 24 05:51:14.784185 master-0 kubenswrapper[34361]: I0224 05:51:14.784156 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-738d-account-create-update-p9hmm"
Feb 24 05:51:14.788261 master-0 kubenswrapper[34361]: I0224 05:51:14.786533 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 24 05:51:14.795502 master-0 kubenswrapper[34361]: I0224 05:51:14.795447 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-62w87"]
Feb 24 05:51:14.798470 master-0 kubenswrapper[34361]: I0224 05:51:14.798427 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-62w87"
Feb 24 05:51:14.976048 master-0 kubenswrapper[34361]: I0224 05:51:14.975935 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8843ac-1648-4291-8d77-ba67a5e46d2b-operator-scripts\") pod \"glance-738d-account-create-update-p9hmm\" (UID: \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\") " pod="openstack/glance-738d-account-create-update-p9hmm"
Feb 24 05:51:14.976431 master-0 kubenswrapper[34361]: I0224 05:51:14.976280 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnmxz\" (UniqueName: \"kubernetes.io/projected/5560d5a8-4360-4b16-b5ca-2817343b3ec9-kube-api-access-qnmxz\") pod \"glance-db-create-62w87\" (UID: \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\") " pod="openstack/glance-db-create-62w87"
Feb 24 05:51:14.977086 master-0 kubenswrapper[34361]: I0224 05:51:14.976974 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-strj7\" (UniqueName: \"kubernetes.io/projected/dd8843ac-1648-4291-8d77-ba67a5e46d2b-kube-api-access-strj7\") pod \"glance-738d-account-create-update-p9hmm\" (UID: \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\") " pod="openstack/glance-738d-account-create-update-p9hmm"
Feb 24 05:51:14.977262 master-0 kubenswrapper[34361]: I0224 05:51:14.977210 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5560d5a8-4360-4b16-b5ca-2817343b3ec9-operator-scripts\") pod \"glance-db-create-62w87\" (UID: \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\") " pod="openstack/glance-db-create-62w87"
Feb 24 05:51:14.978659 master-0 kubenswrapper[34361]: I0224 05:51:14.978604 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-738d-account-create-update-p9hmm"]
Feb 24 05:51:14.993523 master-0 kubenswrapper[34361]: I0224 05:51:14.989052 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-62w87"]
Feb 24 05:51:15.082293 master-0 kubenswrapper[34361]: I0224 05:51:15.082067 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-strj7\" (UniqueName: \"kubernetes.io/projected/dd8843ac-1648-4291-8d77-ba67a5e46d2b-kube-api-access-strj7\") pod \"glance-738d-account-create-update-p9hmm\" (UID: \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\") " pod="openstack/glance-738d-account-create-update-p9hmm"
Feb 24 05:51:15.082293 master-0 kubenswrapper[34361]: I0224 05:51:15.082207 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5560d5a8-4360-4b16-b5ca-2817343b3ec9-operator-scripts\") pod \"glance-db-create-62w87\" (UID: \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\") " pod="openstack/glance-db-create-62w87"
Feb 24 05:51:15.082740 master-0 kubenswrapper[34361]: I0224 05:51:15.082355 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8843ac-1648-4291-8d77-ba67a5e46d2b-operator-scripts\") pod \"glance-738d-account-create-update-p9hmm\" (UID: \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\") " pod="openstack/glance-738d-account-create-update-p9hmm"
Feb 24 05:51:15.082740 master-0 kubenswrapper[34361]: I0224 05:51:15.082407 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnmxz\" (UniqueName: \"kubernetes.io/projected/5560d5a8-4360-4b16-b5ca-2817343b3ec9-kube-api-access-qnmxz\") pod \"glance-db-create-62w87\" (UID: \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\") " pod="openstack/glance-db-create-62w87"
Feb 24 05:51:15.088449 master-0 kubenswrapper[34361]: I0224 05:51:15.083735 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5560d5a8-4360-4b16-b5ca-2817343b3ec9-operator-scripts\") pod \"glance-db-create-62w87\" (UID: \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\") " pod="openstack/glance-db-create-62w87"
Feb 24 05:51:15.088449 master-0 kubenswrapper[34361]: I0224 05:51:15.084100 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8843ac-1648-4291-8d77-ba67a5e46d2b-operator-scripts\") pod \"glance-738d-account-create-update-p9hmm\" (UID: \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\") " pod="openstack/glance-738d-account-create-update-p9hmm"
Feb 24 05:51:15.216397 master-0 kubenswrapper[34361]: I0224 05:51:15.215768 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnmxz\" (UniqueName: \"kubernetes.io/projected/5560d5a8-4360-4b16-b5ca-2817343b3ec9-kube-api-access-qnmxz\") pod \"glance-db-create-62w87\" (UID: \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\") " pod="openstack/glance-db-create-62w87"
Feb 24 05:51:15.218225 master-0 kubenswrapper[34361]: I0224 05:51:15.218162 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-strj7\" (UniqueName: \"kubernetes.io/projected/dd8843ac-1648-4291-8d77-ba67a5e46d2b-kube-api-access-strj7\") pod \"glance-738d-account-create-update-p9hmm\" (UID: \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\") " pod="openstack/glance-738d-account-create-update-p9hmm"
Feb 24 05:51:15.392771 master-0 kubenswrapper[34361]: I0224 05:51:15.392695 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-f9qxr"]
Feb 24 05:51:15.395237 master-0 kubenswrapper[34361]: I0224 05:51:15.395208 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-f9qxr"
Feb 24 05:51:15.407108 master-0 kubenswrapper[34361]: I0224 05:51:15.407020 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-738d-account-create-update-p9hmm"
Feb 24 05:51:15.422359 master-0 kubenswrapper[34361]: I0224 05:51:15.412864 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-f9qxr"]
Feb 24 05:51:15.422359 master-0 kubenswrapper[34361]: I0224 05:51:15.418895 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-62w87"
Feb 24 05:51:15.495621 master-0 kubenswrapper[34361]: I0224 05:51:15.495534 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8hlr\" (UniqueName: \"kubernetes.io/projected/962f1471-7def-4417-a4bc-cf1013a76b2f-kube-api-access-h8hlr\") pod \"keystone-db-create-f9qxr\" (UID: \"962f1471-7def-4417-a4bc-cf1013a76b2f\") " pod="openstack/keystone-db-create-f9qxr"
Feb 24 05:51:15.495932 master-0 kubenswrapper[34361]: I0224 05:51:15.495766 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962f1471-7def-4417-a4bc-cf1013a76b2f-operator-scripts\") pod \"keystone-db-create-f9qxr\" (UID: \"962f1471-7def-4417-a4bc-cf1013a76b2f\") " pod="openstack/keystone-db-create-f9qxr"
Feb 24 05:51:15.520161 master-0 kubenswrapper[34361]: I0224 05:51:15.520071 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7814-account-create-update-vkdnw"]
Feb 24 05:51:15.523099 master-0 kubenswrapper[34361]: I0224 05:51:15.523051 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7814-account-create-update-vkdnw"
Feb 24 05:51:15.536290 master-0 kubenswrapper[34361]: I0224 05:51:15.536240 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 24 05:51:15.537123 master-0 kubenswrapper[34361]: I0224 05:51:15.537074 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7814-account-create-update-vkdnw"]
Feb 24 05:51:15.599927 master-0 kubenswrapper[34361]: I0224 05:51:15.599775 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962f1471-7def-4417-a4bc-cf1013a76b2f-operator-scripts\") pod \"keystone-db-create-f9qxr\" (UID: \"962f1471-7def-4417-a4bc-cf1013a76b2f\") " pod="openstack/keystone-db-create-f9qxr"
Feb 24 05:51:15.600255 master-0 kubenswrapper[34361]: I0224 05:51:15.600180 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8hlr\" (UniqueName: \"kubernetes.io/projected/962f1471-7def-4417-a4bc-cf1013a76b2f-kube-api-access-h8hlr\") pod \"keystone-db-create-f9qxr\" (UID: \"962f1471-7def-4417-a4bc-cf1013a76b2f\") " pod="openstack/keystone-db-create-f9qxr"
Feb 24 05:51:15.601721 master-0 kubenswrapper[34361]: I0224 05:51:15.601335 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962f1471-7def-4417-a4bc-cf1013a76b2f-operator-scripts\") pod \"keystone-db-create-f9qxr\" (UID: \"962f1471-7def-4417-a4bc-cf1013a76b2f\") " pod="openstack/keystone-db-create-f9qxr"
Feb 24 05:51:15.617230 master-0 kubenswrapper[34361]: I0224 05:51:15.617151 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rfgw2"]
Feb 24 05:51:15.618993 master-0 kubenswrapper[34361]: I0224 05:51:15.618916 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rfgw2"
Feb 24 05:51:15.631142 master-0 kubenswrapper[34361]: I0224 05:51:15.631070 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rfgw2"]
Feb 24 05:51:15.646713 master-0 kubenswrapper[34361]: I0224 05:51:15.646672 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8hlr\" (UniqueName: \"kubernetes.io/projected/962f1471-7def-4417-a4bc-cf1013a76b2f-kube-api-access-h8hlr\") pod \"keystone-db-create-f9qxr\" (UID: \"962f1471-7def-4417-a4bc-cf1013a76b2f\") " pod="openstack/keystone-db-create-f9qxr"
Feb 24 05:51:15.702923 master-0 kubenswrapper[34361]: I0224 05:51:15.702836 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6frdf\" (UniqueName: \"kubernetes.io/projected/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-kube-api-access-6frdf\") pod \"keystone-7814-account-create-update-vkdnw\" (UID: \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\") " pod="openstack/keystone-7814-account-create-update-vkdnw"
Feb 24 05:51:15.703330 master-0 kubenswrapper[34361]: I0224 05:51:15.703285 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-operator-scripts\") pod \"keystone-7814-account-create-update-vkdnw\" (UID: \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\") " pod="openstack/keystone-7814-account-create-update-vkdnw"
Feb 24 05:51:15.707828 master-0 kubenswrapper[34361]: I0224 05:51:15.707786 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b69d-account-create-update-7dq92"]
Feb 24 05:51:15.716225 master-0 kubenswrapper[34361]: I0224 05:51:15.716185 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b69d-account-create-update-7dq92"
Feb 24 05:51:15.719418 master-0 kubenswrapper[34361]: I0224 05:51:15.719390 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 24 05:51:15.728405 master-0 kubenswrapper[34361]: I0224 05:51:15.728362 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b69d-account-create-update-7dq92"]
Feb 24 05:51:15.767425 master-0 kubenswrapper[34361]: I0224 05:51:15.767384 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-f9qxr"
Feb 24 05:51:15.811686 master-0 kubenswrapper[34361]: I0224 05:51:15.811438 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1580115a-d292-46a8-b90c-850d483892a4-operator-scripts\") pod \"placement-db-create-rfgw2\" (UID: \"1580115a-d292-46a8-b90c-850d483892a4\") " pod="openstack/placement-db-create-rfgw2"
Feb 24 05:51:15.811686 master-0 kubenswrapper[34361]: I0224 05:51:15.811601 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b94586-4af1-4814-aaa7-baeba7af6359-operator-scripts\") pod \"placement-b69d-account-create-update-7dq92\" (UID: \"c5b94586-4af1-4814-aaa7-baeba7af6359\") " pod="openstack/placement-b69d-account-create-update-7dq92"
Feb 24 05:51:15.811879 master-0 kubenswrapper[34361]: I0224 05:51:15.811826 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-operator-scripts\") pod \"keystone-7814-account-create-update-vkdnw\" (UID: \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\") " pod="openstack/keystone-7814-account-create-update-vkdnw"
Feb 24 05:51:15.812145 master-0 kubenswrapper[34361]: I0224 05:51:15.812089 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7vnl\" (UniqueName: \"kubernetes.io/projected/c5b94586-4af1-4814-aaa7-baeba7af6359-kube-api-access-r7vnl\") pod \"placement-b69d-account-create-update-7dq92\" (UID: \"c5b94586-4af1-4814-aaa7-baeba7af6359\") " pod="openstack/placement-b69d-account-create-update-7dq92"
Feb 24 05:51:15.812520 master-0 kubenswrapper[34361]: I0224 05:51:15.812415 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6frdf\" (UniqueName: \"kubernetes.io/projected/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-kube-api-access-6frdf\") pod \"keystone-7814-account-create-update-vkdnw\" (UID: \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\") " pod="openstack/keystone-7814-account-create-update-vkdnw"
Feb 24 05:51:15.812586 master-0 kubenswrapper[34361]: I0224 05:51:15.812549 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkz2r\" (UniqueName: \"kubernetes.io/projected/1580115a-d292-46a8-b90c-850d483892a4-kube-api-access-tkz2r\") pod \"placement-db-create-rfgw2\" (UID: \"1580115a-d292-46a8-b90c-850d483892a4\") " pod="openstack/placement-db-create-rfgw2"
Feb 24 05:51:15.812879 master-0 kubenswrapper[34361]: I0224 05:51:15.812840 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-operator-scripts\") pod \"keystone-7814-account-create-update-vkdnw\" (UID: \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\") " pod="openstack/keystone-7814-account-create-update-vkdnw"
Feb 24 05:51:15.832867 master-0 kubenswrapper[34361]: I0224 05:51:15.832784 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6frdf\" (UniqueName: \"kubernetes.io/projected/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-kube-api-access-6frdf\") pod \"keystone-7814-account-create-update-vkdnw\" (UID: \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\") " pod="openstack/keystone-7814-account-create-update-vkdnw"
Feb 24 05:51:15.915404 master-0 kubenswrapper[34361]: I0224 05:51:15.915004 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7vnl\" (UniqueName: \"kubernetes.io/projected/c5b94586-4af1-4814-aaa7-baeba7af6359-kube-api-access-r7vnl\") pod \"placement-b69d-account-create-update-7dq92\" (UID: \"c5b94586-4af1-4814-aaa7-baeba7af6359\") " pod="openstack/placement-b69d-account-create-update-7dq92"
Feb 24 05:51:15.915404 master-0 kubenswrapper[34361]: I0224 05:51:15.915146 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkz2r\" (UniqueName: \"kubernetes.io/projected/1580115a-d292-46a8-b90c-850d483892a4-kube-api-access-tkz2r\") pod \"placement-db-create-rfgw2\" (UID: \"1580115a-d292-46a8-b90c-850d483892a4\") " pod="openstack/placement-db-create-rfgw2"
Feb 24 05:51:15.915404 master-0 kubenswrapper[34361]: I0224 05:51:15.915226 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1580115a-d292-46a8-b90c-850d483892a4-operator-scripts\") pod \"placement-db-create-rfgw2\" (UID: \"1580115a-d292-46a8-b90c-850d483892a4\") " pod="openstack/placement-db-create-rfgw2"
Feb 24 05:51:15.915404 master-0 kubenswrapper[34361]: I0224 05:51:15.915271 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b94586-4af1-4814-aaa7-baeba7af6359-operator-scripts\") pod \"placement-b69d-account-create-update-7dq92\" (UID: \"c5b94586-4af1-4814-aaa7-baeba7af6359\") " pod="openstack/placement-b69d-account-create-update-7dq92"
Feb 24 05:51:15.918139 master-0 kubenswrapper[34361]: I0224 05:51:15.916136 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1580115a-d292-46a8-b90c-850d483892a4-operator-scripts\") pod \"placement-db-create-rfgw2\" (UID: \"1580115a-d292-46a8-b90c-850d483892a4\") " pod="openstack/placement-db-create-rfgw2"
Feb 24 05:51:15.919532 master-0 kubenswrapper[34361]: I0224 05:51:15.919380 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b94586-4af1-4814-aaa7-baeba7af6359-operator-scripts\") pod \"placement-b69d-account-create-update-7dq92\" (UID: \"c5b94586-4af1-4814-aaa7-baeba7af6359\") " pod="openstack/placement-b69d-account-create-update-7dq92"
Feb 24 05:51:15.935713 master-0 kubenswrapper[34361]: I0224 05:51:15.935653 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7vnl\" (UniqueName: \"kubernetes.io/projected/c5b94586-4af1-4814-aaa7-baeba7af6359-kube-api-access-r7vnl\") pod \"placement-b69d-account-create-update-7dq92\" (UID: \"c5b94586-4af1-4814-aaa7-baeba7af6359\") " pod="openstack/placement-b69d-account-create-update-7dq92"
Feb 24 05:51:15.936864 master-0 kubenswrapper[34361]: I0224 05:51:15.936813 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkz2r\" (UniqueName: \"kubernetes.io/projected/1580115a-d292-46a8-b90c-850d483892a4-kube-api-access-tkz2r\") pod \"placement-db-create-rfgw2\" (UID: \"1580115a-d292-46a8-b90c-850d483892a4\") " pod="openstack/placement-db-create-rfgw2"
Feb 24 05:51:15.971823 master-0 kubenswrapper[34361]: I0224 05:51:15.971720 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-th4vs" event={"ID":"34fce7dc-c92e-471b-9efa-f4960fb52c37","Type":"ContainerStarted","Data":"efa6878cae69ea047bc5f877367310c77b1d3f3e8c14467bf3ecbf3c2cf3002b"}
Feb 24 05:51:16.022775 master-0 kubenswrapper[34361]: I0224 05:51:16.022656 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-th4vs" podStartSLOduration=2.791334219 podStartE2EDuration="7.022627849s" podCreationTimestamp="2026-02-24 05:51:09 +0000 UTC" firstStartedPulling="2026-02-24 05:51:11.140784223 +0000 UTC m=+830.843401279" lastFinishedPulling="2026-02-24 05:51:15.372077853 +0000 UTC m=+835.074694909" observedRunningTime="2026-02-24 05:51:16.001763277 +0000 UTC m=+835.704380343" watchObservedRunningTime="2026-02-24 05:51:16.022627849 +0000 UTC m=+835.725244895"
Feb 24 05:51:16.065664 master-0 kubenswrapper[34361]: I0224 05:51:16.065576 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-738d-account-create-update-p9hmm"]
Feb 24 05:51:16.081654 master-0 kubenswrapper[34361]: I0224 05:51:16.081493 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7814-account-create-update-vkdnw"
Feb 24 05:51:16.122370 master-0 kubenswrapper[34361]: I0224 05:51:16.122284 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rfgw2"
Feb 24 05:51:16.130756 master-0 kubenswrapper[34361]: I0224 05:51:16.130270 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b69d-account-create-update-7dq92"
Feb 24 05:51:16.197349 master-0 kubenswrapper[34361]: I0224 05:51:16.194775 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-62w87"]
Feb 24 05:51:16.363255 master-0 kubenswrapper[34361]: I0224 05:51:16.361496 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-f9qxr"]
Feb 24 05:51:16.743946 master-0 kubenswrapper[34361]: I0224 05:51:16.743856 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0"
Feb 24 05:51:17.036195 master-0 kubenswrapper[34361]: E0224 05:51:16.744186 34361 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 24 05:51:17.036195 master-0 kubenswrapper[34361]: E0224 05:51:16.744208 34361 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 24 05:51:17.036195 master-0 kubenswrapper[34361]: E0224 05:51:16.744285 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift podName:9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c nodeName:}" failed. No retries permitted until 2026-02-24 05:51:24.744265392 +0000 UTC m=+844.446882438 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift") pod "swift-storage-0" (UID: "9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c") : configmap "swift-ring-files" not found
Feb 24 05:51:17.036195 master-0 kubenswrapper[34361]: I0224 05:51:16.939652 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c55964f59-4n57j"
Feb 24 05:51:17.036195 master-0 kubenswrapper[34361]: I0224 05:51:17.002777 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-738d-account-create-update-p9hmm" event={"ID":"dd8843ac-1648-4291-8d77-ba67a5e46d2b","Type":"ContainerStarted","Data":"73a8cb5c337320a20db44ede97428663e7ea0d745e84e06c83b6a594fac95297"}
Feb 24 05:51:17.036195 master-0 kubenswrapper[34361]: I0224 05:51:17.014633 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62w87" event={"ID":"5560d5a8-4360-4b16-b5ca-2817343b3ec9","Type":"ContainerStarted","Data":"cd546c64ee79ed45b8a23fca7e02da93fa650e57aa0f2bdea7072068123cd304"}
Feb 24 05:51:17.045401 master-0 kubenswrapper[34361]: I0224 05:51:17.043661 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-k22s7"]
Feb 24 05:51:17.045401 master-0 kubenswrapper[34361]: I0224 05:51:17.044016 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" podUID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" containerName="dnsmasq-dns" containerID="cri-o://3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2" gracePeriod=10
Feb 24 05:51:17.404898 master-0 kubenswrapper[34361]: I0224 05:51:17.402924 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7814-account-create-update-vkdnw"]
Feb 24 05:51:17.412790 master-0 kubenswrapper[34361]: W0224 05:51:17.412712 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5c030be_c6d3_421a_9a88_e5a3cbd5c8ca.slice/crio-6ef0c10209f147fb22f6b237a79a3c4443a58239b17eaa2e872073abc4c4d086 WatchSource:0}: Error finding container 6ef0c10209f147fb22f6b237a79a3c4443a58239b17eaa2e872073abc4c4d086: Status 404 returned error can't find the container with id 6ef0c10209f147fb22f6b237a79a3c4443a58239b17eaa2e872073abc4c4d086
Feb 24 05:51:17.714649 master-0 kubenswrapper[34361]: I0224 05:51:17.714564 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b69d-account-create-update-7dq92"]
Feb 24 05:51:17.724185 master-0 kubenswrapper[34361]: I0224 05:51:17.724118 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rfgw2"]
Feb 24 05:51:17.812697 master-0 kubenswrapper[34361]: W0224 05:51:17.812155 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1580115a_d292_46a8_b90c_850d483892a4.slice/crio-dae3daa2120754626adfd57c74d622d5a45f63f35d5dc2843718e1e73d40b1e3 WatchSource:0}: Error finding container dae3daa2120754626adfd57c74d622d5a45f63f35d5dc2843718e1e73d40b1e3: Status 404 returned error can't find the container with id dae3daa2120754626adfd57c74d622d5a45f63f35d5dc2843718e1e73d40b1e3
Feb 24 05:51:17.976304 master-0 kubenswrapper[34361]: I0224 05:51:17.976236 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7"
Feb 24 05:51:18.007280 master-0 kubenswrapper[34361]: I0224 05:51:18.007221 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-694c6\" (UniqueName: \"kubernetes.io/projected/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-kube-api-access-694c6\") pod \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") "
Feb 24 05:51:18.007280 master-0 kubenswrapper[34361]: I0224 05:51:18.007292 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-dns-svc\") pod \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") "
Feb 24 05:51:18.007624 master-0 kubenswrapper[34361]: I0224 05:51:18.007392 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-config\") pod \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\" (UID: \"d6091c0d-046c-44f0-888c-dbc5ac5a7aae\") "
Feb 24 05:51:18.016926 master-0 kubenswrapper[34361]: I0224 05:51:18.015614 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-kube-api-access-694c6" (OuterVolumeSpecName: "kube-api-access-694c6") pod "d6091c0d-046c-44f0-888c-dbc5ac5a7aae" (UID: "d6091c0d-046c-44f0-888c-dbc5ac5a7aae"). InnerVolumeSpecName "kube-api-access-694c6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:51:18.028882 master-0 kubenswrapper[34361]: I0224 05:51:18.028815 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b69d-account-create-update-7dq92" event={"ID":"c5b94586-4af1-4814-aaa7-baeba7af6359","Type":"ContainerStarted","Data":"1b49bfad8125d09a83f925603539c7f4874bbed165e6095d059a61fe787a46ca"}
Feb 24 05:51:18.031641 master-0 kubenswrapper[34361]: I0224 05:51:18.031458 34361 generic.go:334] "Generic (PLEG): container finished" podID="5560d5a8-4360-4b16-b5ca-2817343b3ec9" containerID="a3dea6b25f70ce313d99b17381b693616e1d7310210cc2cba8026f495273acdd" exitCode=0
Feb 24 05:51:18.031641 master-0 kubenswrapper[34361]: I0224 05:51:18.031573 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62w87" event={"ID":"5560d5a8-4360-4b16-b5ca-2817343b3ec9","Type":"ContainerDied","Data":"a3dea6b25f70ce313d99b17381b693616e1d7310210cc2cba8026f495273acdd"}
Feb 24 05:51:18.033598 master-0 kubenswrapper[34361]: I0224 05:51:18.033551 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rfgw2" event={"ID":"1580115a-d292-46a8-b90c-850d483892a4","Type":"ContainerStarted","Data":"dae3daa2120754626adfd57c74d622d5a45f63f35d5dc2843718e1e73d40b1e3"}
Feb 24 05:51:18.037758 master-0 kubenswrapper[34361]: I0224 05:51:18.037705 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7814-account-create-update-vkdnw" event={"ID":"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca","Type":"ContainerStarted","Data":"1ab20e1b4e8e66bc887752774a1f00920bee111374be74e85255dfa14c229255"}
Feb 24 05:51:18.037758 master-0 kubenswrapper[34361]: I0224 05:51:18.037750 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7814-account-create-update-vkdnw" event={"ID":"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca","Type":"ContainerStarted","Data":"6ef0c10209f147fb22f6b237a79a3c4443a58239b17eaa2e872073abc4c4d086"}
Feb 24 05:51:18.048009 master-0 kubenswrapper[34361]: I0224 05:51:18.047901 34361 generic.go:334] "Generic (PLEG): container finished" podID="962f1471-7def-4417-a4bc-cf1013a76b2f" containerID="e5228a7ddc095dcfad9c4d23fbce83825608be2bd0507a3e6be62ae35103f671" exitCode=0
Feb 24 05:51:18.048167 master-0 kubenswrapper[34361]: I0224 05:51:18.048120 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-f9qxr" event={"ID":"962f1471-7def-4417-a4bc-cf1013a76b2f","Type":"ContainerDied","Data":"e5228a7ddc095dcfad9c4d23fbce83825608be2bd0507a3e6be62ae35103f671"}
Feb 24 05:51:18.048242 master-0 kubenswrapper[34361]: I0224 05:51:18.048208 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-f9qxr" event={"ID":"962f1471-7def-4417-a4bc-cf1013a76b2f","Type":"ContainerStarted","Data":"32698214e800d6bf223942d9eb6573e0f8b75a9174ea37b80835711d7c75703b"}
Feb 24 05:51:18.057131 master-0 kubenswrapper[34361]: I0224 05:51:18.057058 34361 generic.go:334] "Generic (PLEG): container finished" podID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" containerID="3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2" exitCode=0
Feb 24 05:51:18.057380 master-0 kubenswrapper[34361]: I0224 05:51:18.057154 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" event={"ID":"d6091c0d-046c-44f0-888c-dbc5ac5a7aae","Type":"ContainerDied","Data":"3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2"}
Feb 24 05:51:18.057380 master-0 kubenswrapper[34361]: I0224 05:51:18.057192 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7" event={"ID":"d6091c0d-046c-44f0-888c-dbc5ac5a7aae","Type":"ContainerDied","Data":"1ae433538cdcd1d0dd339543bc958b8801113e679597a11d30b127e7438236d0"}
Feb 24 05:51:18.057380 master-0 kubenswrapper[34361]: I0224 05:51:18.057212 34361 scope.go:117] "RemoveContainer" containerID="3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2"
Feb 24 05:51:18.057496 master-0 kubenswrapper[34361]: I0224 05:51:18.057394 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c45d57b9c-k22s7"
Feb 24 05:51:18.063924 master-0 kubenswrapper[34361]: I0224 05:51:18.063889 34361 generic.go:334] "Generic (PLEG): container finished" podID="dd8843ac-1648-4291-8d77-ba67a5e46d2b" containerID="7112a30745a472e0723347eec5359bb7a5397bcd85933a4808883c7db8b9763e" exitCode=0
Feb 24 05:51:18.064018 master-0 kubenswrapper[34361]: I0224 05:51:18.063936 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-738d-account-create-update-p9hmm" event={"ID":"dd8843ac-1648-4291-8d77-ba67a5e46d2b","Type":"ContainerDied","Data":"7112a30745a472e0723347eec5359bb7a5397bcd85933a4808883c7db8b9763e"}
Feb 24 05:51:18.093947 master-0 kubenswrapper[34361]: I0224 05:51:18.093901 34361 scope.go:117] "RemoveContainer" containerID="ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf"
Feb 24 05:51:18.108317 master-0 kubenswrapper[34361]: I0224 05:51:18.107513 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6091c0d-046c-44f0-888c-dbc5ac5a7aae" (UID: "d6091c0d-046c-44f0-888c-dbc5ac5a7aae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:51:18.110404 master-0 kubenswrapper[34361]: I0224 05:51:18.110255 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-694c6\" (UniqueName: \"kubernetes.io/projected/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-kube-api-access-694c6\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:18.110404 master-0 kubenswrapper[34361]: I0224 05:51:18.110287 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:18.132482 master-0 kubenswrapper[34361]: I0224 05:51:18.131188 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-config" (OuterVolumeSpecName: "config") pod "d6091c0d-046c-44f0-888c-dbc5ac5a7aae" (UID: "d6091c0d-046c-44f0-888c-dbc5ac5a7aae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:51:18.148534 master-0 kubenswrapper[34361]: I0224 05:51:18.148351 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7814-account-create-update-vkdnw" podStartSLOduration=3.14831543 podStartE2EDuration="3.14831543s" podCreationTimestamp="2026-02-24 05:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:51:18.100225264 +0000 UTC m=+837.802842310" watchObservedRunningTime="2026-02-24 05:51:18.14831543 +0000 UTC m=+837.850932476"
Feb 24 05:51:18.187656 master-0 kubenswrapper[34361]: I0224 05:51:18.184611 34361 scope.go:117] "RemoveContainer" containerID="3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2"
Feb 24 05:51:18.187656 master-0 kubenswrapper[34361]: E0224 05:51:18.186555 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2\": container with ID starting with 3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2 not found: ID does not exist" containerID="3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2"
Feb 24 05:51:18.187656 master-0 kubenswrapper[34361]: I0224 05:51:18.186590 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2"} err="failed to get container status \"3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2\": rpc error: code = NotFound desc = could not find container \"3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2\": container with ID starting with 3c1ffe819156bf68fb0b360ad9725e8527c880e13a059a4e3c15a9267322f7d2 not found: ID does not exist"
Feb 24 05:51:18.187656 master-0 kubenswrapper[34361]: I0224 05:51:18.186613 34361 scope.go:117] "RemoveContainer" containerID="ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf"
Feb 24 05:51:18.199696 master-0 kubenswrapper[34361]: E0224 05:51:18.198928 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf\": container with ID starting with ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf not found: ID does not exist" containerID="ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf"
Feb 24 05:51:18.199696 master-0 kubenswrapper[34361]: I0224 05:51:18.199017 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf"} err="failed to get container status \"ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf\": rpc error: code = NotFound desc = could not find container \"ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf\": container with ID starting with ed307627f9254dfe3a5af801ad793ca19f7eb6ad84d41960f01d9f6cf10504cf not found: ID does not exist"
Feb 24 05:51:18.212403 master-0 kubenswrapper[34361]: I0224 05:51:18.212199 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6091c0d-046c-44f0-888c-dbc5ac5a7aae-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:18.553267 master-0 kubenswrapper[34361]: I0224 05:51:18.553177 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-k22s7"]
Feb 24 05:51:18.565400 master-0 kubenswrapper[34361]: I0224 05:51:18.565330 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c45d57b9c-k22s7"]
Feb 24 05:51:18.616570 master-0 kubenswrapper[34361]: I0224 05:51:18.616387 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir"
podUID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" path="/var/lib/kubelet/pods/d6091c0d-046c-44f0-888c-dbc5ac5a7aae/volumes" Feb 24 05:51:19.080420 master-0 kubenswrapper[34361]: I0224 05:51:19.080360 34361 generic.go:334] "Generic (PLEG): container finished" podID="c5b94586-4af1-4814-aaa7-baeba7af6359" containerID="8d1f60d586ac2b0d3c3b17db22297f713d35b861309c73591b30209f6c98ad21" exitCode=0 Feb 24 05:51:19.081496 master-0 kubenswrapper[34361]: I0224 05:51:19.080442 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b69d-account-create-update-7dq92" event={"ID":"c5b94586-4af1-4814-aaa7-baeba7af6359","Type":"ContainerDied","Data":"8d1f60d586ac2b0d3c3b17db22297f713d35b861309c73591b30209f6c98ad21"} Feb 24 05:51:19.084622 master-0 kubenswrapper[34361]: I0224 05:51:19.084543 34361 generic.go:334] "Generic (PLEG): container finished" podID="1580115a-d292-46a8-b90c-850d483892a4" containerID="3377e172e16cebea7357cd056913ba2980dd298b502c539acf7cae646d2d3c96" exitCode=0 Feb 24 05:51:19.084768 master-0 kubenswrapper[34361]: I0224 05:51:19.084701 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rfgw2" event={"ID":"1580115a-d292-46a8-b90c-850d483892a4","Type":"ContainerDied","Data":"3377e172e16cebea7357cd056913ba2980dd298b502c539acf7cae646d2d3c96"} Feb 24 05:51:19.088369 master-0 kubenswrapper[34361]: I0224 05:51:19.087871 34361 generic.go:334] "Generic (PLEG): container finished" podID="d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca" containerID="1ab20e1b4e8e66bc887752774a1f00920bee111374be74e85255dfa14c229255" exitCode=0 Feb 24 05:51:19.088369 master-0 kubenswrapper[34361]: I0224 05:51:19.087998 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7814-account-create-update-vkdnw" event={"ID":"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca","Type":"ContainerDied","Data":"1ab20e1b4e8e66bc887752774a1f00920bee111374be74e85255dfa14c229255"} Feb 24 05:51:19.866458 master-0 kubenswrapper[34361]: I0224 
05:51:19.866279 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-f9qxr" Feb 24 05:51:19.977708 master-0 kubenswrapper[34361]: I0224 05:51:19.977624 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-62w87" Feb 24 05:51:19.995386 master-0 kubenswrapper[34361]: I0224 05:51:19.990512 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962f1471-7def-4417-a4bc-cf1013a76b2f-operator-scripts\") pod \"962f1471-7def-4417-a4bc-cf1013a76b2f\" (UID: \"962f1471-7def-4417-a4bc-cf1013a76b2f\") " Feb 24 05:51:19.995386 master-0 kubenswrapper[34361]: I0224 05:51:19.990573 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8hlr\" (UniqueName: \"kubernetes.io/projected/962f1471-7def-4417-a4bc-cf1013a76b2f-kube-api-access-h8hlr\") pod \"962f1471-7def-4417-a4bc-cf1013a76b2f\" (UID: \"962f1471-7def-4417-a4bc-cf1013a76b2f\") " Feb 24 05:51:19.995386 master-0 kubenswrapper[34361]: I0224 05:51:19.990581 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-738d-account-create-update-p9hmm" Feb 24 05:51:19.995386 master-0 kubenswrapper[34361]: I0224 05:51:19.991242 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/962f1471-7def-4417-a4bc-cf1013a76b2f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "962f1471-7def-4417-a4bc-cf1013a76b2f" (UID: "962f1471-7def-4417-a4bc-cf1013a76b2f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:19.995386 master-0 kubenswrapper[34361]: I0224 05:51:19.991977 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/962f1471-7def-4417-a4bc-cf1013a76b2f-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:20.018956 master-0 kubenswrapper[34361]: I0224 05:51:20.018831 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962f1471-7def-4417-a4bc-cf1013a76b2f-kube-api-access-h8hlr" (OuterVolumeSpecName: "kube-api-access-h8hlr") pod "962f1471-7def-4417-a4bc-cf1013a76b2f" (UID: "962f1471-7def-4417-a4bc-cf1013a76b2f"). InnerVolumeSpecName "kube-api-access-h8hlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:20.092875 master-0 kubenswrapper[34361]: I0224 05:51:20.092804 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5560d5a8-4360-4b16-b5ca-2817343b3ec9-operator-scripts\") pod \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\" (UID: \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\") " Feb 24 05:51:20.093563 master-0 kubenswrapper[34361]: I0224 05:51:20.092935 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8843ac-1648-4291-8d77-ba67a5e46d2b-operator-scripts\") pod \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\" (UID: \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\") " Feb 24 05:51:20.093563 master-0 kubenswrapper[34361]: I0224 05:51:20.093082 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnmxz\" (UniqueName: \"kubernetes.io/projected/5560d5a8-4360-4b16-b5ca-2817343b3ec9-kube-api-access-qnmxz\") pod \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\" (UID: \"5560d5a8-4360-4b16-b5ca-2817343b3ec9\") " Feb 24 05:51:20.093563 master-0 
kubenswrapper[34361]: I0224 05:51:20.093195 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-strj7\" (UniqueName: \"kubernetes.io/projected/dd8843ac-1648-4291-8d77-ba67a5e46d2b-kube-api-access-strj7\") pod \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\" (UID: \"dd8843ac-1648-4291-8d77-ba67a5e46d2b\") " Feb 24 05:51:20.094003 master-0 kubenswrapper[34361]: I0224 05:51:20.093968 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8hlr\" (UniqueName: \"kubernetes.io/projected/962f1471-7def-4417-a4bc-cf1013a76b2f-kube-api-access-h8hlr\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:20.094936 master-0 kubenswrapper[34361]: I0224 05:51:20.094899 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd8843ac-1648-4291-8d77-ba67a5e46d2b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd8843ac-1648-4291-8d77-ba67a5e46d2b" (UID: "dd8843ac-1648-4291-8d77-ba67a5e46d2b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:20.095299 master-0 kubenswrapper[34361]: I0224 05:51:20.095267 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5560d5a8-4360-4b16-b5ca-2817343b3ec9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5560d5a8-4360-4b16-b5ca-2817343b3ec9" (UID: "5560d5a8-4360-4b16-b5ca-2817343b3ec9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:20.097420 master-0 kubenswrapper[34361]: I0224 05:51:20.097384 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd8843ac-1648-4291-8d77-ba67a5e46d2b-kube-api-access-strj7" (OuterVolumeSpecName: "kube-api-access-strj7") pod "dd8843ac-1648-4291-8d77-ba67a5e46d2b" (UID: "dd8843ac-1648-4291-8d77-ba67a5e46d2b"). InnerVolumeSpecName "kube-api-access-strj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:20.100948 master-0 kubenswrapper[34361]: I0224 05:51:20.100899 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5560d5a8-4360-4b16-b5ca-2817343b3ec9-kube-api-access-qnmxz" (OuterVolumeSpecName: "kube-api-access-qnmxz") pod "5560d5a8-4360-4b16-b5ca-2817343b3ec9" (UID: "5560d5a8-4360-4b16-b5ca-2817343b3ec9"). InnerVolumeSpecName "kube-api-access-qnmxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:20.111732 master-0 kubenswrapper[34361]: I0224 05:51:20.111663 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-62w87" event={"ID":"5560d5a8-4360-4b16-b5ca-2817343b3ec9","Type":"ContainerDied","Data":"cd546c64ee79ed45b8a23fca7e02da93fa650e57aa0f2bdea7072068123cd304"} Feb 24 05:51:20.111732 master-0 kubenswrapper[34361]: I0224 05:51:20.111726 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd546c64ee79ed45b8a23fca7e02da93fa650e57aa0f2bdea7072068123cd304" Feb 24 05:51:20.111910 master-0 kubenswrapper[34361]: I0224 05:51:20.111801 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-62w87" Feb 24 05:51:20.129473 master-0 kubenswrapper[34361]: I0224 05:51:20.129415 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-f9qxr" event={"ID":"962f1471-7def-4417-a4bc-cf1013a76b2f","Type":"ContainerDied","Data":"32698214e800d6bf223942d9eb6573e0f8b75a9174ea37b80835711d7c75703b"} Feb 24 05:51:20.129628 master-0 kubenswrapper[34361]: I0224 05:51:20.129480 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32698214e800d6bf223942d9eb6573e0f8b75a9174ea37b80835711d7c75703b" Feb 24 05:51:20.129628 master-0 kubenswrapper[34361]: I0224 05:51:20.129565 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-f9qxr" Feb 24 05:51:20.137488 master-0 kubenswrapper[34361]: I0224 05:51:20.137404 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-738d-account-create-update-p9hmm" event={"ID":"dd8843ac-1648-4291-8d77-ba67a5e46d2b","Type":"ContainerDied","Data":"73a8cb5c337320a20db44ede97428663e7ea0d745e84e06c83b6a594fac95297"} Feb 24 05:51:20.137588 master-0 kubenswrapper[34361]: I0224 05:51:20.137495 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73a8cb5c337320a20db44ede97428663e7ea0d745e84e06c83b6a594fac95297" Feb 24 05:51:20.137661 master-0 kubenswrapper[34361]: I0224 05:51:20.137640 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-738d-account-create-update-p9hmm" Feb 24 05:51:20.196355 master-0 kubenswrapper[34361]: I0224 05:51:20.196088 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5560d5a8-4360-4b16-b5ca-2817343b3ec9-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:20.196355 master-0 kubenswrapper[34361]: I0224 05:51:20.196163 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd8843ac-1648-4291-8d77-ba67a5e46d2b-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:20.196355 master-0 kubenswrapper[34361]: I0224 05:51:20.196175 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnmxz\" (UniqueName: \"kubernetes.io/projected/5560d5a8-4360-4b16-b5ca-2817343b3ec9-kube-api-access-qnmxz\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:20.196355 master-0 kubenswrapper[34361]: I0224 05:51:20.196184 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-strj7\" (UniqueName: \"kubernetes.io/projected/dd8843ac-1648-4291-8d77-ba67a5e46d2b-kube-api-access-strj7\") on 
node \"master-0\" DevicePath \"\"" Feb 24 05:51:20.735202 master-0 kubenswrapper[34361]: I0224 05:51:20.735133 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rfgw2" Feb 24 05:51:20.826494 master-0 kubenswrapper[34361]: I0224 05:51:20.826427 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1580115a-d292-46a8-b90c-850d483892a4-operator-scripts\") pod \"1580115a-d292-46a8-b90c-850d483892a4\" (UID: \"1580115a-d292-46a8-b90c-850d483892a4\") " Feb 24 05:51:20.826794 master-0 kubenswrapper[34361]: I0224 05:51:20.826518 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkz2r\" (UniqueName: \"kubernetes.io/projected/1580115a-d292-46a8-b90c-850d483892a4-kube-api-access-tkz2r\") pod \"1580115a-d292-46a8-b90c-850d483892a4\" (UID: \"1580115a-d292-46a8-b90c-850d483892a4\") " Feb 24 05:51:20.827137 master-0 kubenswrapper[34361]: I0224 05:51:20.827095 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1580115a-d292-46a8-b90c-850d483892a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1580115a-d292-46a8-b90c-850d483892a4" (UID: "1580115a-d292-46a8-b90c-850d483892a4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:20.828561 master-0 kubenswrapper[34361]: I0224 05:51:20.828477 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1580115a-d292-46a8-b90c-850d483892a4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:20.830864 master-0 kubenswrapper[34361]: I0224 05:51:20.830814 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1580115a-d292-46a8-b90c-850d483892a4-kube-api-access-tkz2r" (OuterVolumeSpecName: "kube-api-access-tkz2r") pod "1580115a-d292-46a8-b90c-850d483892a4" (UID: "1580115a-d292-46a8-b90c-850d483892a4"). InnerVolumeSpecName "kube-api-access-tkz2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:20.932339 master-0 kubenswrapper[34361]: I0224 05:51:20.931599 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkz2r\" (UniqueName: \"kubernetes.io/projected/1580115a-d292-46a8-b90c-850d483892a4-kube-api-access-tkz2r\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:20.936327 master-0 kubenswrapper[34361]: I0224 05:51:20.933032 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b69d-account-create-update-7dq92" Feb 24 05:51:20.940331 master-0 kubenswrapper[34361]: I0224 05:51:20.939056 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7814-account-create-update-vkdnw" Feb 24 05:51:21.042347 master-0 kubenswrapper[34361]: I0224 05:51:21.040202 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6frdf\" (UniqueName: \"kubernetes.io/projected/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-kube-api-access-6frdf\") pod \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\" (UID: \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\") " Feb 24 05:51:21.042347 master-0 kubenswrapper[34361]: I0224 05:51:21.040376 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7vnl\" (UniqueName: \"kubernetes.io/projected/c5b94586-4af1-4814-aaa7-baeba7af6359-kube-api-access-r7vnl\") pod \"c5b94586-4af1-4814-aaa7-baeba7af6359\" (UID: \"c5b94586-4af1-4814-aaa7-baeba7af6359\") " Feb 24 05:51:21.042347 master-0 kubenswrapper[34361]: I0224 05:51:21.040565 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-operator-scripts\") pod \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\" (UID: \"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca\") " Feb 24 05:51:21.042347 master-0 kubenswrapper[34361]: I0224 05:51:21.040595 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b94586-4af1-4814-aaa7-baeba7af6359-operator-scripts\") pod \"c5b94586-4af1-4814-aaa7-baeba7af6359\" (UID: \"c5b94586-4af1-4814-aaa7-baeba7af6359\") " Feb 24 05:51:21.047334 master-0 kubenswrapper[34361]: I0224 05:51:21.043591 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca" (UID: "d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:21.047334 master-0 kubenswrapper[34361]: I0224 05:51:21.043817 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5b94586-4af1-4814-aaa7-baeba7af6359-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5b94586-4af1-4814-aaa7-baeba7af6359" (UID: "c5b94586-4af1-4814-aaa7-baeba7af6359"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:21.055326 master-0 kubenswrapper[34361]: I0224 05:51:21.050941 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b94586-4af1-4814-aaa7-baeba7af6359-kube-api-access-r7vnl" (OuterVolumeSpecName: "kube-api-access-r7vnl") pod "c5b94586-4af1-4814-aaa7-baeba7af6359" (UID: "c5b94586-4af1-4814-aaa7-baeba7af6359"). InnerVolumeSpecName "kube-api-access-r7vnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:21.055326 master-0 kubenswrapper[34361]: I0224 05:51:21.051440 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-kube-api-access-6frdf" (OuterVolumeSpecName: "kube-api-access-6frdf") pod "d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca" (UID: "d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca"). InnerVolumeSpecName "kube-api-access-6frdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:21.144434 master-0 kubenswrapper[34361]: I0224 05:51:21.144370 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:21.144434 master-0 kubenswrapper[34361]: I0224 05:51:21.144422 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5b94586-4af1-4814-aaa7-baeba7af6359-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:21.144434 master-0 kubenswrapper[34361]: I0224 05:51:21.144437 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6frdf\" (UniqueName: \"kubernetes.io/projected/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca-kube-api-access-6frdf\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:21.145177 master-0 kubenswrapper[34361]: I0224 05:51:21.144454 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7vnl\" (UniqueName: \"kubernetes.io/projected/c5b94586-4af1-4814-aaa7-baeba7af6359-kube-api-access-r7vnl\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:21.151369 master-0 kubenswrapper[34361]: I0224 05:51:21.151283 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b69d-account-create-update-7dq92" event={"ID":"c5b94586-4af1-4814-aaa7-baeba7af6359","Type":"ContainerDied","Data":"1b49bfad8125d09a83f925603539c7f4874bbed165e6095d059a61fe787a46ca"} Feb 24 05:51:21.151457 master-0 kubenswrapper[34361]: I0224 05:51:21.151374 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b49bfad8125d09a83f925603539c7f4874bbed165e6095d059a61fe787a46ca" Feb 24 05:51:21.151457 master-0 kubenswrapper[34361]: I0224 05:51:21.151446 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b69d-account-create-update-7dq92" Feb 24 05:51:21.154235 master-0 kubenswrapper[34361]: I0224 05:51:21.154202 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rfgw2" Feb 24 05:51:21.154300 master-0 kubenswrapper[34361]: I0224 05:51:21.154219 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rfgw2" event={"ID":"1580115a-d292-46a8-b90c-850d483892a4","Type":"ContainerDied","Data":"dae3daa2120754626adfd57c74d622d5a45f63f35d5dc2843718e1e73d40b1e3"} Feb 24 05:51:21.154507 master-0 kubenswrapper[34361]: I0224 05:51:21.154360 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae3daa2120754626adfd57c74d622d5a45f63f35d5dc2843718e1e73d40b1e3" Feb 24 05:51:21.157270 master-0 kubenswrapper[34361]: I0224 05:51:21.157223 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7814-account-create-update-vkdnw" event={"ID":"d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca","Type":"ContainerDied","Data":"6ef0c10209f147fb22f6b237a79a3c4443a58239b17eaa2e872073abc4c4d086"} Feb 24 05:51:21.157342 master-0 kubenswrapper[34361]: I0224 05:51:21.157271 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ef0c10209f147fb22f6b237a79a3c4443a58239b17eaa2e872073abc4c4d086" Feb 24 05:51:21.157433 master-0 kubenswrapper[34361]: I0224 05:51:21.157397 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7814-account-create-update-vkdnw" Feb 24 05:51:21.270262 master-0 kubenswrapper[34361]: I0224 05:51:21.270153 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qw6cm"] Feb 24 05:51:21.270828 master-0 kubenswrapper[34361]: E0224 05:51:21.270790 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" containerName="init" Feb 24 05:51:21.270828 master-0 kubenswrapper[34361]: I0224 05:51:21.270814 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" containerName="init" Feb 24 05:51:21.270828 master-0 kubenswrapper[34361]: E0224 05:51:21.270832 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b94586-4af1-4814-aaa7-baeba7af6359" containerName="mariadb-account-create-update" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: I0224 05:51:21.270840 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b94586-4af1-4814-aaa7-baeba7af6359" containerName="mariadb-account-create-update" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: E0224 05:51:21.270852 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962f1471-7def-4417-a4bc-cf1013a76b2f" containerName="mariadb-database-create" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: I0224 05:51:21.270859 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="962f1471-7def-4417-a4bc-cf1013a76b2f" containerName="mariadb-database-create" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: E0224 05:51:21.270890 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5560d5a8-4360-4b16-b5ca-2817343b3ec9" containerName="mariadb-database-create" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: I0224 05:51:21.270896 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="5560d5a8-4360-4b16-b5ca-2817343b3ec9" containerName="mariadb-database-create" 
Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: E0224 05:51:21.270911 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" containerName="dnsmasq-dns" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: I0224 05:51:21.270918 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" containerName="dnsmasq-dns" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: E0224 05:51:21.270951 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca" containerName="mariadb-account-create-update" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: I0224 05:51:21.270958 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca" containerName="mariadb-account-create-update" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: E0224 05:51:21.270970 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd8843ac-1648-4291-8d77-ba67a5e46d2b" containerName="mariadb-account-create-update" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: I0224 05:51:21.270978 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd8843ac-1648-4291-8d77-ba67a5e46d2b" containerName="mariadb-account-create-update" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: E0224 05:51:21.270995 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1580115a-d292-46a8-b90c-850d483892a4" containerName="mariadb-database-create" Feb 24 05:51:21.271042 master-0 kubenswrapper[34361]: I0224 05:51:21.271002 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="1580115a-d292-46a8-b90c-850d483892a4" containerName="mariadb-database-create" Feb 24 05:51:21.271930 master-0 kubenswrapper[34361]: I0224 05:51:21.271188 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="962f1471-7def-4417-a4bc-cf1013a76b2f" containerName="mariadb-database-create" Feb 24 
05:51:21.271930 master-0 kubenswrapper[34361]: I0224 05:51:21.271217 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca" containerName="mariadb-account-create-update"
Feb 24 05:51:21.271930 master-0 kubenswrapper[34361]: I0224 05:51:21.271235 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="1580115a-d292-46a8-b90c-850d483892a4" containerName="mariadb-database-create"
Feb 24 05:51:21.271930 master-0 kubenswrapper[34361]: I0224 05:51:21.271251 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd8843ac-1648-4291-8d77-ba67a5e46d2b" containerName="mariadb-account-create-update"
Feb 24 05:51:21.271930 master-0 kubenswrapper[34361]: I0224 05:51:21.271268 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b94586-4af1-4814-aaa7-baeba7af6359" containerName="mariadb-account-create-update"
Feb 24 05:51:21.271930 master-0 kubenswrapper[34361]: I0224 05:51:21.271282 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="5560d5a8-4360-4b16-b5ca-2817343b3ec9" containerName="mariadb-database-create"
Feb 24 05:51:21.271930 master-0 kubenswrapper[34361]: I0224 05:51:21.271301 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6091c0d-046c-44f0-888c-dbc5ac5a7aae" containerName="dnsmasq-dns"
Feb 24 05:51:21.272389 master-0 kubenswrapper[34361]: I0224 05:51:21.272045 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:21.280902 master-0 kubenswrapper[34361]: I0224 05:51:21.280831 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 24 05:51:21.287228 master-0 kubenswrapper[34361]: I0224 05:51:21.287132 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qw6cm"]
Feb 24 05:51:21.365346 master-0 kubenswrapper[34361]: I0224 05:51:21.365144 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpqwd\" (UniqueName: \"kubernetes.io/projected/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-kube-api-access-zpqwd\") pod \"root-account-create-update-qw6cm\" (UID: \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\") " pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:21.365346 master-0 kubenswrapper[34361]: I0224 05:51:21.365303 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-operator-scripts\") pod \"root-account-create-update-qw6cm\" (UID: \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\") " pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:21.468754 master-0 kubenswrapper[34361]: I0224 05:51:21.468670 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpqwd\" (UniqueName: \"kubernetes.io/projected/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-kube-api-access-zpqwd\") pod \"root-account-create-update-qw6cm\" (UID: \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\") " pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:21.468981 master-0 kubenswrapper[34361]: I0224 05:51:21.468900 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-operator-scripts\") pod \"root-account-create-update-qw6cm\" (UID: \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\") " pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:21.470415 master-0 kubenswrapper[34361]: I0224 05:51:21.470299 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-operator-scripts\") pod \"root-account-create-update-qw6cm\" (UID: \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\") " pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:21.487687 master-0 kubenswrapper[34361]: I0224 05:51:21.487630 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpqwd\" (UniqueName: \"kubernetes.io/projected/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-kube-api-access-zpqwd\") pod \"root-account-create-update-qw6cm\" (UID: \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\") " pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:21.627073 master-0 kubenswrapper[34361]: I0224 05:51:21.626846 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:21.657373 master-0 kubenswrapper[34361]: I0224 05:51:21.655766 34361 trace.go:236] Trace[609420603]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (24-Feb-2026 05:51:20.548) (total time: 1107ms):
Feb 24 05:51:21.657373 master-0 kubenswrapper[34361]: Trace[609420603]: [1.107487493s] [1.107487493s] END
Feb 24 05:51:21.854116 master-0 kubenswrapper[34361]: I0224 05:51:21.854057 34361 trace.go:236] Trace[1348943089]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-cell1-server-0" (24-Feb-2026 05:51:20.548) (total time: 1305ms):
Feb 24 05:51:21.854116 master-0 kubenswrapper[34361]: Trace[1348943089]: [1.305844471s] [1.305844471s] END
Feb 24 05:51:22.143164 master-0 kubenswrapper[34361]: I0224 05:51:22.143077 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qw6cm"]
Feb 24 05:51:22.175865 master-0 kubenswrapper[34361]: I0224 05:51:22.175775 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qw6cm" event={"ID":"174354fe-3c0c-488f-ab4c-4acb44c9cf4f","Type":"ContainerStarted","Data":"e2f8bcb9a7b3f22b5982d03d0cac3f4552b9e0d94309c11c18ad7749bbab2bf5"}
Feb 24 05:51:23.193628 master-0 kubenswrapper[34361]: I0224 05:51:23.193528 34361 generic.go:334] "Generic (PLEG): container finished" podID="174354fe-3c0c-488f-ab4c-4acb44c9cf4f" containerID="daf55ae9d390f698358051c3226bb41d0c117e2713443d4a5ebb58d7b50960ec" exitCode=0
Feb 24 05:51:23.194440 master-0 kubenswrapper[34361]: I0224 05:51:23.193678 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qw6cm" event={"ID":"174354fe-3c0c-488f-ab4c-4acb44c9cf4f","Type":"ContainerDied","Data":"daf55ae9d390f698358051c3226bb41d0c117e2713443d4a5ebb58d7b50960ec"}
Feb 24 05:51:23.197296 master-0 kubenswrapper[34361]: I0224 05:51:23.197251 34361 generic.go:334] "Generic (PLEG): container finished" podID="34fce7dc-c92e-471b-9efa-f4960fb52c37" containerID="efa6878cae69ea047bc5f877367310c77b1d3f3e8c14467bf3ecbf3c2cf3002b" exitCode=0
Feb 24 05:51:23.197435 master-0 kubenswrapper[34361]: I0224 05:51:23.197331 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-th4vs" event={"ID":"34fce7dc-c92e-471b-9efa-f4960fb52c37","Type":"ContainerDied","Data":"efa6878cae69ea047bc5f877367310c77b1d3f3e8c14467bf3ecbf3c2cf3002b"}
Feb 24 05:51:24.797844 master-0 kubenswrapper[34361]: I0224 05:51:24.797769 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0"
Feb 24 05:51:24.831562 master-0 kubenswrapper[34361]: I0224 05:51:24.828889 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c-etc-swift\") pod \"swift-storage-0\" (UID: \"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c\") " pod="openstack/swift-storage-0"
Feb 24 05:51:24.936758 master-0 kubenswrapper[34361]: I0224 05:51:24.929489 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-f4vxh"]
Feb 24 05:51:24.936758 master-0 kubenswrapper[34361]: I0224 05:51:24.930966 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:24.958355 master-0 kubenswrapper[34361]: I0224 05:51:24.945959 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bdafd-config-data"
Feb 24 05:51:24.973452 master-0 kubenswrapper[34361]: I0224 05:51:24.958822 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 24 05:51:25.025334 master-0 kubenswrapper[34361]: I0224 05:51:25.015477 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxdwh\" (UniqueName: \"kubernetes.io/projected/36f1ab4b-258e-434c-8674-0375758ffd49-kube-api-access-nxdwh\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.025334 master-0 kubenswrapper[34361]: I0224 05:51:25.015561 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-combined-ca-bundle\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.025334 master-0 kubenswrapper[34361]: I0224 05:51:25.015591 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-db-sync-config-data\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.025334 master-0 kubenswrapper[34361]: I0224 05:51:25.015684 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-config-data\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.033570 master-0 kubenswrapper[34361]: I0224 05:51:25.031694 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-f4vxh"]
Feb 24 05:51:25.117828 master-0 kubenswrapper[34361]: I0224 05:51:25.117769 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxdwh\" (UniqueName: \"kubernetes.io/projected/36f1ab4b-258e-434c-8674-0375758ffd49-kube-api-access-nxdwh\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.117971 master-0 kubenswrapper[34361]: I0224 05:51:25.117836 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-combined-ca-bundle\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.117971 master-0 kubenswrapper[34361]: I0224 05:51:25.117873 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-db-sync-config-data\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.117971 master-0 kubenswrapper[34361]: I0224 05:51:25.117943 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-config-data\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.118170 master-0 kubenswrapper[34361]: I0224 05:51:25.118124 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:25.124120 master-0 kubenswrapper[34361]: I0224 05:51:25.124074 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-db-sync-config-data\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.124915 master-0 kubenswrapper[34361]: I0224 05:51:25.124884 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-config-data\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.148374 master-0 kubenswrapper[34361]: I0224 05:51:25.148235 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxdwh\" (UniqueName: \"kubernetes.io/projected/36f1ab4b-258e-434c-8674-0375758ffd49-kube-api-access-nxdwh\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.158085 master-0 kubenswrapper[34361]: I0224 05:51:25.158039 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-combined-ca-bundle\") pod \"glance-db-sync-f4vxh\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.183264 master-0 kubenswrapper[34361]: I0224 05:51:25.183220 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-th4vs"
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.220624 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-scripts\") pod \"34fce7dc-c92e-471b-9efa-f4960fb52c37\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") "
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.220773 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-combined-ca-bundle\") pod \"34fce7dc-c92e-471b-9efa-f4960fb52c37\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") "
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.220805 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd7mm\" (UniqueName: \"kubernetes.io/projected/34fce7dc-c92e-471b-9efa-f4960fb52c37-kube-api-access-fd7mm\") pod \"34fce7dc-c92e-471b-9efa-f4960fb52c37\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") "
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.220855 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-swiftconf\") pod \"34fce7dc-c92e-471b-9efa-f4960fb52c37\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") "
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.220885 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpqwd\" (UniqueName: \"kubernetes.io/projected/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-kube-api-access-zpqwd\") pod \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\" (UID: \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\") "
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.220920 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-operator-scripts\") pod \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\" (UID: \"174354fe-3c0c-488f-ab4c-4acb44c9cf4f\") "
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.220945 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34fce7dc-c92e-471b-9efa-f4960fb52c37-etc-swift\") pod \"34fce7dc-c92e-471b-9efa-f4960fb52c37\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") "
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.221214 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-dispersionconf\") pod \"34fce7dc-c92e-471b-9efa-f4960fb52c37\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") "
Feb 24 05:51:25.225340 master-0 kubenswrapper[34361]: I0224 05:51:25.221281 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-ring-data-devices\") pod \"34fce7dc-c92e-471b-9efa-f4960fb52c37\" (UID: \"34fce7dc-c92e-471b-9efa-f4960fb52c37\") "
Feb 24 05:51:25.228648 master-0 kubenswrapper[34361]: I0224 05:51:25.227925 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "174354fe-3c0c-488f-ab4c-4acb44c9cf4f" (UID: "174354fe-3c0c-488f-ab4c-4acb44c9cf4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:51:25.228854 master-0 kubenswrapper[34361]: I0224 05:51:25.228797 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fce7dc-c92e-471b-9efa-f4960fb52c37-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "34fce7dc-c92e-471b-9efa-f4960fb52c37" (UID: "34fce7dc-c92e-471b-9efa-f4960fb52c37"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:51:25.229610 master-0 kubenswrapper[34361]: I0224 05:51:25.229590 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "34fce7dc-c92e-471b-9efa-f4960fb52c37" (UID: "34fce7dc-c92e-471b-9efa-f4960fb52c37"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:51:25.234203 master-0 kubenswrapper[34361]: I0224 05:51:25.234127 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-kube-api-access-zpqwd" (OuterVolumeSpecName: "kube-api-access-zpqwd") pod "174354fe-3c0c-488f-ab4c-4acb44c9cf4f" (UID: "174354fe-3c0c-488f-ab4c-4acb44c9cf4f"). InnerVolumeSpecName "kube-api-access-zpqwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:51:25.242371 master-0 kubenswrapper[34361]: I0224 05:51:25.241605 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "34fce7dc-c92e-471b-9efa-f4960fb52c37" (UID: "34fce7dc-c92e-471b-9efa-f4960fb52c37"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:51:25.243388 master-0 kubenswrapper[34361]: I0224 05:51:25.243034 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fce7dc-c92e-471b-9efa-f4960fb52c37-kube-api-access-fd7mm" (OuterVolumeSpecName: "kube-api-access-fd7mm") pod "34fce7dc-c92e-471b-9efa-f4960fb52c37" (UID: "34fce7dc-c92e-471b-9efa-f4960fb52c37"). InnerVolumeSpecName "kube-api-access-fd7mm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:51:25.250109 master-0 kubenswrapper[34361]: I0224 05:51:25.249233 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qw6cm" event={"ID":"174354fe-3c0c-488f-ab4c-4acb44c9cf4f","Type":"ContainerDied","Data":"e2f8bcb9a7b3f22b5982d03d0cac3f4552b9e0d94309c11c18ad7749bbab2bf5"}
Feb 24 05:51:25.250109 master-0 kubenswrapper[34361]: I0224 05:51:25.249296 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2f8bcb9a7b3f22b5982d03d0cac3f4552b9e0d94309c11c18ad7749bbab2bf5"
Feb 24 05:51:25.250109 master-0 kubenswrapper[34361]: I0224 05:51:25.249396 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qw6cm"
Feb 24 05:51:25.255909 master-0 kubenswrapper[34361]: I0224 05:51:25.255860 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-th4vs" event={"ID":"34fce7dc-c92e-471b-9efa-f4960fb52c37","Type":"ContainerDied","Data":"cb25f2485895e9388c52ebca56d10fa9ab4dff645993dcefcda33d8cee1e469d"}
Feb 24 05:51:25.255981 master-0 kubenswrapper[34361]: I0224 05:51:25.255936 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb25f2485895e9388c52ebca56d10fa9ab4dff645993dcefcda33d8cee1e469d"
Feb 24 05:51:25.256050 master-0 kubenswrapper[34361]: I0224 05:51:25.256030 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-th4vs"
Feb 24 05:51:25.270869 master-0 kubenswrapper[34361]: I0224 05:51:25.270795 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-scripts" (OuterVolumeSpecName: "scripts") pod "34fce7dc-c92e-471b-9efa-f4960fb52c37" (UID: "34fce7dc-c92e-471b-9efa-f4960fb52c37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:51:25.278486 master-0 kubenswrapper[34361]: I0224 05:51:25.278414 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34fce7dc-c92e-471b-9efa-f4960fb52c37" (UID: "34fce7dc-c92e-471b-9efa-f4960fb52c37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:51:25.295336 master-0 kubenswrapper[34361]: I0224 05:51:25.295217 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "34fce7dc-c92e-471b-9efa-f4960fb52c37" (UID: "34fce7dc-c92e-471b-9efa-f4960fb52c37"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:51:25.324514 master-0 kubenswrapper[34361]: I0224 05:51:25.324419 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.324514 master-0 kubenswrapper[34361]: I0224 05:51:25.324486 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd7mm\" (UniqueName: \"kubernetes.io/projected/34fce7dc-c92e-471b-9efa-f4960fb52c37-kube-api-access-fd7mm\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.324514 master-0 kubenswrapper[34361]: I0224 05:51:25.324504 34361 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-swiftconf\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.324514 master-0 kubenswrapper[34361]: I0224 05:51:25.324524 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpqwd\" (UniqueName: \"kubernetes.io/projected/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-kube-api-access-zpqwd\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.324514 master-0 kubenswrapper[34361]: I0224 05:51:25.324534 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/174354fe-3c0c-488f-ab4c-4acb44c9cf4f-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.324514 master-0 kubenswrapper[34361]: I0224 05:51:25.324545 34361 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/34fce7dc-c92e-471b-9efa-f4960fb52c37-etc-swift\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.324920 master-0 kubenswrapper[34361]: I0224 05:51:25.324555 34361 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/34fce7dc-c92e-471b-9efa-f4960fb52c37-dispersionconf\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.324920 master-0 kubenswrapper[34361]: I0224 05:51:25.324566 34361 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-ring-data-devices\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.324920 master-0 kubenswrapper[34361]: I0224 05:51:25.324574 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/34fce7dc-c92e-471b-9efa-f4960fb52c37-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:51:25.416665 master-0 kubenswrapper[34361]: I0224 05:51:25.416572 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-f4vxh"
Feb 24 05:51:25.569018 master-0 kubenswrapper[34361]: I0224 05:51:25.568690 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 24 05:51:25.953057 master-0 kubenswrapper[34361]: I0224 05:51:25.952995 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-f4vxh"]
Feb 24 05:51:25.953670 master-0 kubenswrapper[34361]: W0224 05:51:25.953586 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36f1ab4b_258e_434c_8674_0375758ffd49.slice/crio-e7b55dbbec1fa720ce0965143242e9d550a47fd64b16551c121e8d5ddee00643 WatchSource:0}: Error finding container e7b55dbbec1fa720ce0965143242e9d550a47fd64b16551c121e8d5ddee00643: Status 404 returned error can't find the container with id e7b55dbbec1fa720ce0965143242e9d550a47fd64b16551c121e8d5ddee00643
Feb 24 05:51:26.274741 master-0 kubenswrapper[34361]: I0224 05:51:26.274646 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f4vxh" event={"ID":"36f1ab4b-258e-434c-8674-0375758ffd49","Type":"ContainerStarted","Data":"e7b55dbbec1fa720ce0965143242e9d550a47fd64b16551c121e8d5ddee00643"}
Feb 24 05:51:26.276926 master-0 kubenswrapper[34361]: I0224 05:51:26.276874 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"254db6b658f780413b5845217f3092ecf5918fff4267bbbd3493f685afc91783"}
Feb 24 05:51:26.719446 master-0 kubenswrapper[34361]: I0224 05:51:26.719246 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 24 05:51:27.304017 master-0 kubenswrapper[34361]: I0224 05:51:27.303929 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"b90c5246f32595221d1a86d92773f9acc8730b5c2355d8dd02e8cb0f6f0dec52"}
Feb 24 05:51:27.519515 master-0 kubenswrapper[34361]: I0224 05:51:27.519459 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qw6cm"]
Feb 24 05:51:27.537019 master-0 kubenswrapper[34361]: I0224 05:51:27.536331 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qw6cm"]
Feb 24 05:51:28.320732 master-0 kubenswrapper[34361]: I0224 05:51:28.320648 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"c376fc22824a2b51a3bb284c7301833a872c0633e7d7db95a69f719441608517"}
Feb 24 05:51:28.320732 master-0 kubenswrapper[34361]: I0224 05:51:28.320726 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"6d6308111a14803ab87c4287de4f47874420ccdd0c9debadf702aa7af943bfd7"}
Feb 24 05:51:28.320732 master-0 kubenswrapper[34361]: I0224 05:51:28.320741 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"432ea5f095ab1deec9ac65e924cd23e84f40cef5fccb061a272efdd968ebe93b"}
Feb 24 05:51:28.626848 master-0 kubenswrapper[34361]: I0224 05:51:28.626669 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174354fe-3c0c-488f-ab4c-4acb44c9cf4f" path="/var/lib/kubelet/pods/174354fe-3c0c-488f-ab4c-4acb44c9cf4f/volumes"
Feb 24 05:51:30.353607 master-0 kubenswrapper[34361]: I0224 05:51:30.353523 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"c1dbccbe5c717a99fa06f0686ad7b8a4e0280b291479240a32b36a06074a074f"}
Feb 24 05:51:30.353607 master-0 kubenswrapper[34361]: I0224 05:51:30.353607 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"ec2c372074e90a92512916420537f8f53def228a97fd4c7421365f1e9beb1634"}
Feb 24 05:51:31.373324 master-0 kubenswrapper[34361]: I0224 05:51:31.373226 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"51b91f08f09a09e2340375651e0937ec54a77fd13e4b86e72aaf47da1e442faa"}
Feb 24 05:51:31.373324 master-0 kubenswrapper[34361]: I0224 05:51:31.373298 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"024b1b1b102c869a68d149a86529514dea24dc40512d312385c9dd58ffd1e866"}
Feb 24 05:51:32.511008 master-0 kubenswrapper[34361]: I0224 05:51:32.510849 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7zq2x"]
Feb 24 05:51:32.511742 master-0 kubenswrapper[34361]: E0224 05:51:32.511440 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fce7dc-c92e-471b-9efa-f4960fb52c37" containerName="swift-ring-rebalance"
Feb 24 05:51:32.511742 master-0 kubenswrapper[34361]: I0224 05:51:32.511457 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fce7dc-c92e-471b-9efa-f4960fb52c37" containerName="swift-ring-rebalance"
Feb 24 05:51:32.511742 master-0 kubenswrapper[34361]: E0224 05:51:32.511468 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174354fe-3c0c-488f-ab4c-4acb44c9cf4f" containerName="mariadb-account-create-update"
Feb 24 05:51:32.511742 master-0 kubenswrapper[34361]: I0224 05:51:32.511474 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="174354fe-3c0c-488f-ab4c-4acb44c9cf4f" containerName="mariadb-account-create-update"
Feb 24 05:51:32.511742 master-0 kubenswrapper[34361]: I0224 05:51:32.511702 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fce7dc-c92e-471b-9efa-f4960fb52c37" containerName="swift-ring-rebalance"
Feb 24 05:51:32.511742 master-0 kubenswrapper[34361]: I0224 05:51:32.511737 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="174354fe-3c0c-488f-ab4c-4acb44c9cf4f" containerName="mariadb-account-create-update"
Feb 24 05:51:32.512505 master-0 kubenswrapper[34361]: I0224 05:51:32.512471 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7zq2x"
Feb 24 05:51:32.515889 master-0 kubenswrapper[34361]: I0224 05:51:32.515781 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 24 05:51:32.539386 master-0 kubenswrapper[34361]: I0224 05:51:32.539318 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7zq2x"]
Feb 24 05:51:32.640056 master-0 kubenswrapper[34361]: I0224 05:51:32.639974 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-operator-scripts\") pod \"root-account-create-update-7zq2x\" (UID: \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\") " pod="openstack/root-account-create-update-7zq2x"
Feb 24 05:51:32.640361 master-0 kubenswrapper[34361]: I0224 05:51:32.640069 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnwpd\" (UniqueName: \"kubernetes.io/projected/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-kube-api-access-hnwpd\") pod \"root-account-create-update-7zq2x\" (UID: \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\") " pod="openstack/root-account-create-update-7zq2x"
Feb 24 05:51:32.744717 master-0 kubenswrapper[34361]: I0224 05:51:32.744637 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-operator-scripts\") pod \"root-account-create-update-7zq2x\" (UID: \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\") " pod="openstack/root-account-create-update-7zq2x"
Feb 24 05:51:32.745073 master-0 kubenswrapper[34361]: I0224 05:51:32.743011 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-operator-scripts\") pod \"root-account-create-update-7zq2x\" (UID: \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\") " pod="openstack/root-account-create-update-7zq2x"
Feb 24 05:51:32.745073 master-0 kubenswrapper[34361]: I0224 05:51:32.745053 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnwpd\" (UniqueName: \"kubernetes.io/projected/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-kube-api-access-hnwpd\") pod \"root-account-create-update-7zq2x\" (UID: \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\") " pod="openstack/root-account-create-update-7zq2x"
Feb 24 05:51:32.764400 master-0 kubenswrapper[34361]: I0224 05:51:32.764219 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnwpd\" (UniqueName: \"kubernetes.io/projected/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-kube-api-access-hnwpd\") pod \"root-account-create-update-7zq2x\" (UID: \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\") " pod="openstack/root-account-create-update-7zq2x"
Feb 24 05:51:32.841817 master-0 kubenswrapper[34361]: I0224 05:51:32.841745 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7zq2x"
Feb 24 05:51:34.417334 master-0 kubenswrapper[34361]: I0224 05:51:34.417200 34361 generic.go:334] "Generic (PLEG): container finished" podID="1c4de741-6cb1-4ef0-80c9-173c72825057" containerID="0bed034cacdb2711007170fd519dfa80f4a0929347ae6ae697577eea31ed0b62" exitCode=0
Feb 24 05:51:34.417334 master-0 kubenswrapper[34361]: I0224 05:51:34.417284 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c4de741-6cb1-4ef0-80c9-173c72825057","Type":"ContainerDied","Data":"0bed034cacdb2711007170fd519dfa80f4a0929347ae6ae697577eea31ed0b62"}
Feb 24 05:51:34.420425 master-0 kubenswrapper[34361]: I0224 05:51:34.420296 34361 generic.go:334] "Generic (PLEG): container finished" podID="9ac06ab8-5197-4557-8124-583f49b6082b" containerID="47391d5812a91a97d137b4d1eda20bb7e42aa4f35e595c745924a38f15a39f1c" exitCode=0
Feb 24 05:51:34.420425 master-0 kubenswrapper[34361]: I0224 05:51:34.420373 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9ac06ab8-5197-4557-8124-583f49b6082b","Type":"ContainerDied","Data":"47391d5812a91a97d137b4d1eda20bb7e42aa4f35e595c745924a38f15a39f1c"}
Feb 24 05:51:34.947802 master-0 kubenswrapper[34361]: I0224 05:51:34.947705 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5kh8v" podUID="c11f5497-b3de-43b7-9312-b06485f2df8a" containerName="ovn-controller" probeResult="failure" output=<
Feb 24 05:51:34.947802 master-0 kubenswrapper[34361]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 24 05:51:34.947802 master-0 kubenswrapper[34361]: >
Feb 24 05:51:35.015107 master-0 kubenswrapper[34361]: I0224 05:51:35.015039 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:51:35.026751 master-0 kubenswrapper[34361]: I0224 05:51:35.026677 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-86mtg"
Feb 24 05:51:35.412735 master-0 kubenswrapper[34361]: I0224 05:51:35.412636 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5kh8v-config-nwrnh"]
Feb 24 05:51:35.415357 master-0 kubenswrapper[34361]: I0224 05:51:35.415284 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kh8v-config-nwrnh"
Feb 24 05:51:35.418993 master-0 kubenswrapper[34361]: I0224 05:51:35.418952 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 24 05:51:35.507201 master-0 kubenswrapper[34361]: I0224 05:51:35.507121 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kh8v-config-nwrnh"]
Feb 24 05:51:35.522909 master-0 kubenswrapper[34361]: I0224 05:51:35.522788 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-log-ovn\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh"
Feb 24 05:51:35.523301 master-0 kubenswrapper[34361]: I0224 05:51:35.523038 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-additional-scripts\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh"
Feb 24 05:51:35.523301 master-0 kubenswrapper[34361]: I0224 05:51:35.523180 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh"
Feb 24 05:51:35.523301 master-0 kubenswrapper[34361]: I0224 05:51:35.523245 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-scripts\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh"
Feb 24 05:51:35.523570 master-0 kubenswrapper[34361]: I0224 05:51:35.523415 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc2wf\" (UniqueName: \"kubernetes.io/projected/5398499e-548f-46a8-8a5e-45ffb743c2ce-kube-api-access-tc2wf\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh"
Feb 24 05:51:35.523570 master-0 kubenswrapper[34361]: I0224 05:51:35.523558 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run-ovn\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh"
Feb 24 05:51:35.631385 master-0 kubenswrapper[34361]: I0224 05:51:35.628381 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-additional-scripts\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh"
Feb 24 05:51:35.631385 master-0 kubenswrapper[34361]: I0224
05:51:35.628463 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.631385 master-0 kubenswrapper[34361]: I0224 05:51:35.628501 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-scripts\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.631385 master-0 kubenswrapper[34361]: I0224 05:51:35.628545 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc2wf\" (UniqueName: \"kubernetes.io/projected/5398499e-548f-46a8-8a5e-45ffb743c2ce-kube-api-access-tc2wf\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.631385 master-0 kubenswrapper[34361]: I0224 05:51:35.628604 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run-ovn\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.631385 master-0 kubenswrapper[34361]: I0224 05:51:35.628737 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-log-ovn\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 
05:51:35.631385 master-0 kubenswrapper[34361]: I0224 05:51:35.629014 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-log-ovn\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.631385 master-0 kubenswrapper[34361]: I0224 05:51:35.629776 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-additional-scripts\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.631385 master-0 kubenswrapper[34361]: I0224 05:51:35.629877 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.631962 master-0 kubenswrapper[34361]: I0224 05:51:35.631721 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run-ovn\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.639243 master-0 kubenswrapper[34361]: I0224 05:51:35.638948 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-scripts\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 
05:51:35.717336 master-0 kubenswrapper[34361]: I0224 05:51:35.716302 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc2wf\" (UniqueName: \"kubernetes.io/projected/5398499e-548f-46a8-8a5e-45ffb743c2ce-kube-api-access-tc2wf\") pod \"ovn-controller-5kh8v-config-nwrnh\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:35.742675 master-0 kubenswrapper[34361]: I0224 05:51:35.742603 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:39.802786 master-0 kubenswrapper[34361]: I0224 05:51:39.802420 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kh8v-config-nwrnh"] Feb 24 05:51:39.963244 master-0 kubenswrapper[34361]: I0224 05:51:39.963155 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5kh8v" podUID="c11f5497-b3de-43b7-9312-b06485f2df8a" containerName="ovn-controller" probeResult="failure" output=< Feb 24 05:51:39.963244 master-0 kubenswrapper[34361]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 24 05:51:39.963244 master-0 kubenswrapper[34361]: > Feb 24 05:51:39.987008 master-0 kubenswrapper[34361]: I0224 05:51:39.986918 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7zq2x"] Feb 24 05:51:40.533448 master-0 kubenswrapper[34361]: I0224 05:51:40.533367 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f4vxh" event={"ID":"36f1ab4b-258e-434c-8674-0375758ffd49","Type":"ContainerStarted","Data":"505c85bf407d16734485972c8f8ba68a955434679874c9188aa62bcaf5c2307a"} Feb 24 05:51:40.539510 master-0 kubenswrapper[34361]: I0224 05:51:40.538765 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"9ac06ab8-5197-4557-8124-583f49b6082b","Type":"ContainerStarted","Data":"78a19a7eab3f1981082a5dd3bf347e560844139587934025779c351ec49b8a34"} Feb 24 05:51:40.539510 master-0 kubenswrapper[34361]: I0224 05:51:40.539184 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 24 05:51:40.557134 master-0 kubenswrapper[34361]: I0224 05:51:40.557006 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"9f97861466b0c6ef5a48bde6f697ed9437fb5f037447ff47522d59ba76d1cf45"} Feb 24 05:51:40.557134 master-0 kubenswrapper[34361]: I0224 05:51:40.557075 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"c6c2f144cab890aad5dac4ff5c7429cb302b8de2bb529c48eff76ccfc0a7d79e"} Feb 24 05:51:40.557134 master-0 kubenswrapper[34361]: I0224 05:51:40.557087 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"c684244a307649cba66e243212bed9be79f9ca86e8c7a6ddfa0cd998d6dba35f"} Feb 24 05:51:40.557134 master-0 kubenswrapper[34361]: I0224 05:51:40.557100 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"62326507348e59585a80a5a8f40aaaaaacfadb5bde55752f15bec5bd15042cae"} Feb 24 05:51:40.563013 master-0 kubenswrapper[34361]: I0224 05:51:40.562724 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-f4vxh" podStartSLOduration=3.1785797430000002 podStartE2EDuration="16.56269991s" podCreationTimestamp="2026-02-24 05:51:24 +0000 UTC" firstStartedPulling="2026-02-24 05:51:25.957826296 +0000 UTC m=+845.660443342" 
lastFinishedPulling="2026-02-24 05:51:39.341946453 +0000 UTC m=+859.044563509" observedRunningTime="2026-02-24 05:51:40.553848421 +0000 UTC m=+860.256465467" watchObservedRunningTime="2026-02-24 05:51:40.56269991 +0000 UTC m=+860.265316956" Feb 24 05:51:40.565056 master-0 kubenswrapper[34361]: I0224 05:51:40.564994 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1c4de741-6cb1-4ef0-80c9-173c72825057","Type":"ContainerStarted","Data":"cdcf455173fc95d4fc3077cf07bd71461bba721c6061eb111e9ec58b4e50be8c"} Feb 24 05:51:40.565474 master-0 kubenswrapper[34361]: I0224 05:51:40.565432 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:51:40.567765 master-0 kubenswrapper[34361]: I0224 05:51:40.567427 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v-config-nwrnh" event={"ID":"5398499e-548f-46a8-8a5e-45ffb743c2ce","Type":"ContainerStarted","Data":"52540c52217a1760ab281ef48a693b2dfb9645bcbc15a990572211f0ca11cb14"} Feb 24 05:51:40.567765 master-0 kubenswrapper[34361]: I0224 05:51:40.567467 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v-config-nwrnh" event={"ID":"5398499e-548f-46a8-8a5e-45ffb743c2ce","Type":"ContainerStarted","Data":"a512a503f79eee5f773b86d46174801af1051a9a410cbacdc92744b3e6b5488a"} Feb 24 05:51:40.591397 master-0 kubenswrapper[34361]: I0224 05:51:40.591230 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7zq2x" event={"ID":"33ec35fe-3c0e-4a79-9f87-63279b8cc21a","Type":"ContainerStarted","Data":"b7d4a41f2866ac98c22c4063d66419579186118ba9b12a5eb8213634976ee515"} Feb 24 05:51:40.591397 master-0 kubenswrapper[34361]: I0224 05:51:40.591295 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7zq2x" 
event={"ID":"33ec35fe-3c0e-4a79-9f87-63279b8cc21a","Type":"ContainerStarted","Data":"c8f979989e62a5cba4035aa7a44aca4cb6f6903eb14d6e7619ce6655456fa152"} Feb 24 05:51:40.598344 master-0 kubenswrapper[34361]: I0224 05:51:40.598166 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=59.663937668 podStartE2EDuration="1m10.598140855s" podCreationTimestamp="2026-02-24 05:50:30 +0000 UTC" firstStartedPulling="2026-02-24 05:50:48.230041453 +0000 UTC m=+807.932658499" lastFinishedPulling="2026-02-24 05:50:59.16424462 +0000 UTC m=+818.866861686" observedRunningTime="2026-02-24 05:51:40.591523747 +0000 UTC m=+860.294140813" watchObservedRunningTime="2026-02-24 05:51:40.598140855 +0000 UTC m=+860.300757901" Feb 24 05:51:40.627178 master-0 kubenswrapper[34361]: I0224 05:51:40.627000 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-7zq2x" podStartSLOduration=8.626976642 podStartE2EDuration="8.626976642s" podCreationTimestamp="2026-02-24 05:51:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:51:40.619847201 +0000 UTC m=+860.322464247" watchObservedRunningTime="2026-02-24 05:51:40.626976642 +0000 UTC m=+860.329593688" Feb 24 05:51:40.674658 master-0 kubenswrapper[34361]: I0224 05:51:40.673884 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=60.890281824 podStartE2EDuration="1m9.673857366s" podCreationTimestamp="2026-02-24 05:50:31 +0000 UTC" firstStartedPulling="2026-02-24 05:50:50.034805474 +0000 UTC m=+809.737422520" lastFinishedPulling="2026-02-24 05:50:58.818381016 +0000 UTC m=+818.520998062" observedRunningTime="2026-02-24 05:51:40.664162185 +0000 UTC m=+860.366779251" watchObservedRunningTime="2026-02-24 05:51:40.673857366 +0000 UTC m=+860.376474412" 
Feb 24 05:51:40.707837 master-0 kubenswrapper[34361]: I0224 05:51:40.707432 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5kh8v-config-nwrnh" podStartSLOduration=5.707400161 podStartE2EDuration="5.707400161s" podCreationTimestamp="2026-02-24 05:51:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:51:40.698900881 +0000 UTC m=+860.401517927" watchObservedRunningTime="2026-02-24 05:51:40.707400161 +0000 UTC m=+860.410017207" Feb 24 05:51:41.609574 master-0 kubenswrapper[34361]: I0224 05:51:41.609491 34361 generic.go:334] "Generic (PLEG): container finished" podID="33ec35fe-3c0e-4a79-9f87-63279b8cc21a" containerID="b7d4a41f2866ac98c22c4063d66419579186118ba9b12a5eb8213634976ee515" exitCode=0 Feb 24 05:51:41.610282 master-0 kubenswrapper[34361]: I0224 05:51:41.609629 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7zq2x" event={"ID":"33ec35fe-3c0e-4a79-9f87-63279b8cc21a","Type":"ContainerDied","Data":"b7d4a41f2866ac98c22c4063d66419579186118ba9b12a5eb8213634976ee515"} Feb 24 05:51:41.619541 master-0 kubenswrapper[34361]: I0224 05:51:41.619483 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"dbebbbc75e9951c64f2befc97fc8f3763d0cce30bd397c1a11c010e7fdf0f1c7"} Feb 24 05:51:41.619654 master-0 kubenswrapper[34361]: I0224 05:51:41.619562 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"bbfcee5eca3b95fe6ede1bdaa321a6a2d2474440be11d3fd707ac2607bb42537"} Feb 24 05:51:41.619654 master-0 kubenswrapper[34361]: I0224 05:51:41.619576 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"9dc3c847-e88b-4279-ba5a-8ef0c93e4e6c","Type":"ContainerStarted","Data":"ee5b9e377c9e24798f75860aa2c24564087bb4e210b49cf9011dda26a6977761"} Feb 24 05:51:41.623856 master-0 kubenswrapper[34361]: I0224 05:51:41.623794 34361 generic.go:334] "Generic (PLEG): container finished" podID="5398499e-548f-46a8-8a5e-45ffb743c2ce" containerID="52540c52217a1760ab281ef48a693b2dfb9645bcbc15a990572211f0ca11cb14" exitCode=0 Feb 24 05:51:41.624030 master-0 kubenswrapper[34361]: I0224 05:51:41.623915 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v-config-nwrnh" event={"ID":"5398499e-548f-46a8-8a5e-45ffb743c2ce","Type":"ContainerDied","Data":"52540c52217a1760ab281ef48a693b2dfb9645bcbc15a990572211f0ca11cb14"} Feb 24 05:51:41.706912 master-0 kubenswrapper[34361]: I0224 05:51:41.706810 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=29.367891037 podStartE2EDuration="35.706786731s" podCreationTimestamp="2026-02-24 05:51:06 +0000 UTC" firstStartedPulling="2026-02-24 05:51:25.572327184 +0000 UTC m=+845.274944250" lastFinishedPulling="2026-02-24 05:51:31.911222898 +0000 UTC m=+851.613839944" observedRunningTime="2026-02-24 05:51:41.690496111 +0000 UTC m=+861.393113167" watchObservedRunningTime="2026-02-24 05:51:41.706786731 +0000 UTC m=+861.409403777" Feb 24 05:51:42.084692 master-0 kubenswrapper[34361]: I0224 05:51:42.084615 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fbf68b9d7-p96gq"] Feb 24 05:51:42.088391 master-0 kubenswrapper[34361]: I0224 05:51:42.088283 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.090211 master-0 kubenswrapper[34361]: I0224 05:51:42.090166 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 24 05:51:42.106438 master-0 kubenswrapper[34361]: I0224 05:51:42.106064 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fbf68b9d7-p96gq"] Feb 24 05:51:42.223244 master-0 kubenswrapper[34361]: I0224 05:51:42.223140 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-swift-storage-0\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.223703 master-0 kubenswrapper[34361]: I0224 05:51:42.223482 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.223703 master-0 kubenswrapper[34361]: I0224 05:51:42.223605 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bpb\" (UniqueName: \"kubernetes.io/projected/b2664286-54db-449b-aee5-fbfa93ab489f-kube-api-access-s2bpb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.223803 master-0 kubenswrapper[34361]: I0224 05:51:42.223764 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-config\") pod 
\"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.223925 master-0 kubenswrapper[34361]: I0224 05:51:42.223885 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.224017 master-0 kubenswrapper[34361]: I0224 05:51:42.223989 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-svc\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.326237 master-0 kubenswrapper[34361]: I0224 05:51:42.326151 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.326611 master-0 kubenswrapper[34361]: I0224 05:51:42.326376 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bpb\" (UniqueName: \"kubernetes.io/projected/b2664286-54db-449b-aee5-fbfa93ab489f-kube-api-access-s2bpb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.326611 master-0 kubenswrapper[34361]: I0224 05:51:42.326477 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-config\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.326611 master-0 kubenswrapper[34361]: I0224 05:51:42.326527 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.326712 master-0 kubenswrapper[34361]: I0224 05:51:42.326638 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-svc\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.326753 master-0 kubenswrapper[34361]: I0224 05:51:42.326719 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-swift-storage-0\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.328371 master-0 kubenswrapper[34361]: I0224 05:51:42.327418 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-sb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.328371 master-0 kubenswrapper[34361]: I0224 05:51:42.327772 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-nb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.328371 master-0 kubenswrapper[34361]: I0224 05:51:42.327879 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-swift-storage-0\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.328371 master-0 kubenswrapper[34361]: I0224 05:51:42.328128 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-svc\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.328739 master-0 kubenswrapper[34361]: I0224 05:51:42.328450 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-config\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.355112 master-0 kubenswrapper[34361]: I0224 05:51:42.354922 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bpb\" (UniqueName: \"kubernetes.io/projected/b2664286-54db-449b-aee5-fbfa93ab489f-kube-api-access-s2bpb\") pod \"dnsmasq-dns-6fbf68b9d7-p96gq\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:42.460884 master-0 kubenswrapper[34361]: I0224 05:51:42.460786 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:43.363947 master-0 kubenswrapper[34361]: I0224 05:51:43.363881 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7zq2x" Feb 24 05:51:43.370948 master-0 kubenswrapper[34361]: I0224 05:51:43.370899 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:43.383734 master-0 kubenswrapper[34361]: W0224 05:51:43.383678 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2664286_54db_449b_aee5_fbfa93ab489f.slice/crio-f6eecb4a8b8598aa9bf7dd6c55b9a46acc5425afce1880dc64f178fbedb97023 WatchSource:0}: Error finding container f6eecb4a8b8598aa9bf7dd6c55b9a46acc5425afce1880dc64f178fbedb97023: Status 404 returned error can't find the container with id f6eecb4a8b8598aa9bf7dd6c55b9a46acc5425afce1880dc64f178fbedb97023 Feb 24 05:51:43.393293 master-0 kubenswrapper[34361]: I0224 05:51:43.393214 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fbf68b9d7-p96gq"] Feb 24 05:51:43.480349 master-0 kubenswrapper[34361]: I0224 05:51:43.478939 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-additional-scripts\") pod \"5398499e-548f-46a8-8a5e-45ffb743c2ce\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " Feb 24 05:51:43.480349 master-0 kubenswrapper[34361]: I0224 05:51:43.479573 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5398499e-548f-46a8-8a5e-45ffb743c2ce" (UID: "5398499e-548f-46a8-8a5e-45ffb743c2ce"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:43.480349 master-0 kubenswrapper[34361]: I0224 05:51:43.480125 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc2wf\" (UniqueName: \"kubernetes.io/projected/5398499e-548f-46a8-8a5e-45ffb743c2ce-kube-api-access-tc2wf\") pod \"5398499e-548f-46a8-8a5e-45ffb743c2ce\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " Feb 24 05:51:43.480349 master-0 kubenswrapper[34361]: I0224 05:51:43.480188 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run-ovn\") pod \"5398499e-548f-46a8-8a5e-45ffb743c2ce\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " Feb 24 05:51:43.483190 master-0 kubenswrapper[34361]: I0224 05:51:43.480929 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnwpd\" (UniqueName: \"kubernetes.io/projected/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-kube-api-access-hnwpd\") pod \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\" (UID: \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\") " Feb 24 05:51:43.483190 master-0 kubenswrapper[34361]: I0224 05:51:43.481304 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run\") pod \"5398499e-548f-46a8-8a5e-45ffb743c2ce\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " Feb 24 05:51:43.483190 master-0 kubenswrapper[34361]: I0224 05:51:43.481424 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-scripts\") pod \"5398499e-548f-46a8-8a5e-45ffb743c2ce\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " Feb 24 05:51:43.483190 master-0 kubenswrapper[34361]: I0224 
05:51:43.481695 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-operator-scripts\") pod \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\" (UID: \"33ec35fe-3c0e-4a79-9f87-63279b8cc21a\") " Feb 24 05:51:43.483432 master-0 kubenswrapper[34361]: I0224 05:51:43.483212 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-log-ovn\") pod \"5398499e-548f-46a8-8a5e-45ffb743c2ce\" (UID: \"5398499e-548f-46a8-8a5e-45ffb743c2ce\") " Feb 24 05:51:43.485320 master-0 kubenswrapper[34361]: I0224 05:51:43.485252 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33ec35fe-3c0e-4a79-9f87-63279b8cc21a" (UID: "33ec35fe-3c0e-4a79-9f87-63279b8cc21a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:43.485418 master-0 kubenswrapper[34361]: I0224 05:51:43.485325 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run" (OuterVolumeSpecName: "var-run") pod "5398499e-548f-46a8-8a5e-45ffb743c2ce" (UID: "5398499e-548f-46a8-8a5e-45ffb743c2ce"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:51:43.485418 master-0 kubenswrapper[34361]: I0224 05:51:43.485352 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5398499e-548f-46a8-8a5e-45ffb743c2ce" (UID: "5398499e-548f-46a8-8a5e-45ffb743c2ce"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:51:43.485508 master-0 kubenswrapper[34361]: I0224 05:51:43.485441 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5398499e-548f-46a8-8a5e-45ffb743c2ce" (UID: "5398499e-548f-46a8-8a5e-45ffb743c2ce"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:51:43.485729 master-0 kubenswrapper[34361]: I0224 05:51:43.485689 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-scripts" (OuterVolumeSpecName: "scripts") pod "5398499e-548f-46a8-8a5e-45ffb743c2ce" (UID: "5398499e-548f-46a8-8a5e-45ffb743c2ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:43.487058 master-0 kubenswrapper[34361]: I0224 05:51:43.487003 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:43.487058 master-0 kubenswrapper[34361]: I0224 05:51:43.487050 34361 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:43.487197 master-0 kubenswrapper[34361]: I0224 05:51:43.487064 34361 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:43.487197 master-0 kubenswrapper[34361]: I0224 05:51:43.487077 34361 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:43.487197 master-0 kubenswrapper[34361]: I0224 05:51:43.487089 34361 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5398499e-548f-46a8-8a5e-45ffb743c2ce-var-run\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:43.487197 master-0 kubenswrapper[34361]: I0224 05:51:43.487098 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5398499e-548f-46a8-8a5e-45ffb743c2ce-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:43.492433 master-0 kubenswrapper[34361]: I0224 05:51:43.491732 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-kube-api-access-hnwpd" (OuterVolumeSpecName: "kube-api-access-hnwpd") pod "33ec35fe-3c0e-4a79-9f87-63279b8cc21a" (UID: "33ec35fe-3c0e-4a79-9f87-63279b8cc21a"). InnerVolumeSpecName "kube-api-access-hnwpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:43.497390 master-0 kubenswrapper[34361]: I0224 05:51:43.495147 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5398499e-548f-46a8-8a5e-45ffb743c2ce-kube-api-access-tc2wf" (OuterVolumeSpecName: "kube-api-access-tc2wf") pod "5398499e-548f-46a8-8a5e-45ffb743c2ce" (UID: "5398499e-548f-46a8-8a5e-45ffb743c2ce"). InnerVolumeSpecName "kube-api-access-tc2wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:43.591917 master-0 kubenswrapper[34361]: I0224 05:51:43.590021 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc2wf\" (UniqueName: \"kubernetes.io/projected/5398499e-548f-46a8-8a5e-45ffb743c2ce-kube-api-access-tc2wf\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:43.591917 master-0 kubenswrapper[34361]: I0224 05:51:43.590095 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnwpd\" (UniqueName: \"kubernetes.io/projected/33ec35fe-3c0e-4a79-9f87-63279b8cc21a-kube-api-access-hnwpd\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:43.658024 master-0 kubenswrapper[34361]: I0224 05:51:43.657966 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v-config-nwrnh" event={"ID":"5398499e-548f-46a8-8a5e-45ffb743c2ce","Type":"ContainerDied","Data":"a512a503f79eee5f773b86d46174801af1051a9a410cbacdc92744b3e6b5488a"} Feb 24 05:51:43.658113 master-0 kubenswrapper[34361]: I0224 05:51:43.658022 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5kh8v-config-nwrnh" Feb 24 05:51:43.658344 master-0 kubenswrapper[34361]: I0224 05:51:43.658030 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a512a503f79eee5f773b86d46174801af1051a9a410cbacdc92744b3e6b5488a" Feb 24 05:51:43.660949 master-0 kubenswrapper[34361]: I0224 05:51:43.660916 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7zq2x" event={"ID":"33ec35fe-3c0e-4a79-9f87-63279b8cc21a","Type":"ContainerDied","Data":"c8f979989e62a5cba4035aa7a44aca4cb6f6903eb14d6e7619ce6655456fa152"} Feb 24 05:51:43.660949 master-0 kubenswrapper[34361]: I0224 05:51:43.660938 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8f979989e62a5cba4035aa7a44aca4cb6f6903eb14d6e7619ce6655456fa152" Feb 24 05:51:43.661217 master-0 kubenswrapper[34361]: I0224 05:51:43.661198 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7zq2x" Feb 24 05:51:43.668983 master-0 kubenswrapper[34361]: I0224 05:51:43.668929 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" event={"ID":"b2664286-54db-449b-aee5-fbfa93ab489f","Type":"ContainerStarted","Data":"f6eecb4a8b8598aa9bf7dd6c55b9a46acc5425afce1880dc64f178fbedb97023"} Feb 24 05:51:44.614522 master-0 kubenswrapper[34361]: I0224 05:51:44.614412 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5kh8v-config-nwrnh"] Feb 24 05:51:44.626093 master-0 kubenswrapper[34361]: I0224 05:51:44.626010 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5kh8v-config-nwrnh"] Feb 24 05:51:44.681291 master-0 kubenswrapper[34361]: I0224 05:51:44.681223 34361 generic.go:334] "Generic (PLEG): container finished" podID="b2664286-54db-449b-aee5-fbfa93ab489f" containerID="1874f97f3f35d8bc961fa39e86a113dada924bd221b4ecf58715baa08fbaf265" exitCode=0 Feb 24 05:51:44.681291 master-0 kubenswrapper[34361]: I0224 05:51:44.681286 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" event={"ID":"b2664286-54db-449b-aee5-fbfa93ab489f","Type":"ContainerDied","Data":"1874f97f3f35d8bc961fa39e86a113dada924bd221b4ecf58715baa08fbaf265"} Feb 24 05:51:44.748431 master-0 kubenswrapper[34361]: I0224 05:51:44.748365 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5kh8v-config-gtgbg"] Feb 24 05:51:44.749372 master-0 kubenswrapper[34361]: E0224 05:51:44.749349 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33ec35fe-3c0e-4a79-9f87-63279b8cc21a" containerName="mariadb-account-create-update" Feb 24 05:51:44.749466 master-0 kubenswrapper[34361]: I0224 05:51:44.749454 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="33ec35fe-3c0e-4a79-9f87-63279b8cc21a" containerName="mariadb-account-create-update" 
Feb 24 05:51:44.749570 master-0 kubenswrapper[34361]: E0224 05:51:44.749558 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5398499e-548f-46a8-8a5e-45ffb743c2ce" containerName="ovn-config" Feb 24 05:51:44.749646 master-0 kubenswrapper[34361]: I0224 05:51:44.749636 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="5398499e-548f-46a8-8a5e-45ffb743c2ce" containerName="ovn-config" Feb 24 05:51:44.750001 master-0 kubenswrapper[34361]: I0224 05:51:44.749987 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="5398499e-548f-46a8-8a5e-45ffb743c2ce" containerName="ovn-config" Feb 24 05:51:44.750096 master-0 kubenswrapper[34361]: I0224 05:51:44.750085 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="33ec35fe-3c0e-4a79-9f87-63279b8cc21a" containerName="mariadb-account-create-update" Feb 24 05:51:44.751106 master-0 kubenswrapper[34361]: I0224 05:51:44.751086 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.756203 master-0 kubenswrapper[34361]: I0224 05:51:44.756133 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 24 05:51:44.763200 master-0 kubenswrapper[34361]: I0224 05:51:44.763130 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kh8v-config-gtgbg"] Feb 24 05:51:44.834288 master-0 kubenswrapper[34361]: I0224 05:51:44.834211 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.835138 master-0 kubenswrapper[34361]: I0224 05:51:44.835098 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-additional-scripts\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.835467 master-0 kubenswrapper[34361]: I0224 05:51:44.835394 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-log-ovn\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.836908 master-0 kubenswrapper[34361]: I0224 05:51:44.836842 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v549\" (UniqueName: \"kubernetes.io/projected/18f0b18a-ce72-499d-bb88-33d0b21a246d-kube-api-access-5v549\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.837364 master-0 kubenswrapper[34361]: I0224 05:51:44.837284 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run-ovn\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.839129 master-0 kubenswrapper[34361]: I0224 05:51:44.839106 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-scripts\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " 
pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.940908 master-0 kubenswrapper[34361]: I0224 05:51:44.940832 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-scripts\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.941986 master-0 kubenswrapper[34361]: I0224 05:51:44.941356 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.941986 master-0 kubenswrapper[34361]: I0224 05:51:44.941429 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.941986 master-0 kubenswrapper[34361]: I0224 05:51:44.941725 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-additional-scripts\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.941986 master-0 kubenswrapper[34361]: I0224 05:51:44.941840 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-log-ovn\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: 
\"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.941986 master-0 kubenswrapper[34361]: I0224 05:51:44.941950 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-log-ovn\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.942185 master-0 kubenswrapper[34361]: I0224 05:51:44.942055 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v549\" (UniqueName: \"kubernetes.io/projected/18f0b18a-ce72-499d-bb88-33d0b21a246d-kube-api-access-5v549\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.942185 master-0 kubenswrapper[34361]: I0224 05:51:44.942093 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run-ovn\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.942251 master-0 kubenswrapper[34361]: I0224 05:51:44.942209 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run-ovn\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.943006 master-0 kubenswrapper[34361]: I0224 05:51:44.942963 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-scripts\") pod 
\"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.943156 master-0 kubenswrapper[34361]: I0224 05:51:44.943129 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-additional-scripts\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:44.952136 master-0 kubenswrapper[34361]: I0224 05:51:44.951697 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5kh8v" Feb 24 05:51:44.959986 master-0 kubenswrapper[34361]: I0224 05:51:44.959933 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v549\" (UniqueName: \"kubernetes.io/projected/18f0b18a-ce72-499d-bb88-33d0b21a246d-kube-api-access-5v549\") pod \"ovn-controller-5kh8v-config-gtgbg\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:45.177663 master-0 kubenswrapper[34361]: I0224 05:51:45.177486 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:45.640584 master-0 kubenswrapper[34361]: I0224 05:51:45.640526 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5kh8v-config-gtgbg"] Feb 24 05:51:45.700345 master-0 kubenswrapper[34361]: I0224 05:51:45.699788 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" event={"ID":"b2664286-54db-449b-aee5-fbfa93ab489f","Type":"ContainerStarted","Data":"85663d123135fcf7174a92dc98c915e80f75576619d7de39fd2a4d3c07cdb68c"} Feb 24 05:51:45.700345 master-0 kubenswrapper[34361]: I0224 05:51:45.700109 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:45.704332 master-0 kubenswrapper[34361]: I0224 05:51:45.703151 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v-config-gtgbg" event={"ID":"18f0b18a-ce72-499d-bb88-33d0b21a246d","Type":"ContainerStarted","Data":"287bba7777a429afda5b0124cfdd5694a753de0569bd1682189b5f87c9cec670"} Feb 24 05:51:45.739353 master-0 kubenswrapper[34361]: I0224 05:51:45.738925 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" podStartSLOduration=3.738903371 podStartE2EDuration="3.738903371s" podCreationTimestamp="2026-02-24 05:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:51:45.736811715 +0000 UTC m=+865.439428771" watchObservedRunningTime="2026-02-24 05:51:45.738903371 +0000 UTC m=+865.441520417" Feb 24 05:51:46.618269 master-0 kubenswrapper[34361]: I0224 05:51:46.618142 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5398499e-548f-46a8-8a5e-45ffb743c2ce" path="/var/lib/kubelet/pods/5398499e-548f-46a8-8a5e-45ffb743c2ce/volumes" Feb 24 05:51:46.722811 master-0 
kubenswrapper[34361]: I0224 05:51:46.722695 34361 generic.go:334] "Generic (PLEG): container finished" podID="18f0b18a-ce72-499d-bb88-33d0b21a246d" containerID="809200b42c356e44e4959c36a4e0f4f9adc64b7377838bfed8429a3c4bc571e9" exitCode=0 Feb 24 05:51:46.723750 master-0 kubenswrapper[34361]: I0224 05:51:46.722792 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v-config-gtgbg" event={"ID":"18f0b18a-ce72-499d-bb88-33d0b21a246d","Type":"ContainerDied","Data":"809200b42c356e44e4959c36a4e0f4f9adc64b7377838bfed8429a3c4bc571e9"} Feb 24 05:51:48.270465 master-0 kubenswrapper[34361]: I0224 05:51:48.270369 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:48.338373 master-0 kubenswrapper[34361]: I0224 05:51:48.338135 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-log-ovn\") pod \"18f0b18a-ce72-499d-bb88-33d0b21a246d\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " Feb 24 05:51:48.338373 master-0 kubenswrapper[34361]: I0224 05:51:48.338250 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-scripts\") pod \"18f0b18a-ce72-499d-bb88-33d0b21a246d\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " Feb 24 05:51:48.339039 master-0 kubenswrapper[34361]: I0224 05:51:48.338678 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run-ovn\") pod \"18f0b18a-ce72-499d-bb88-33d0b21a246d\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " Feb 24 05:51:48.339039 master-0 kubenswrapper[34361]: I0224 05:51:48.338718 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run\") pod \"18f0b18a-ce72-499d-bb88-33d0b21a246d\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " Feb 24 05:51:48.339039 master-0 kubenswrapper[34361]: I0224 05:51:48.338755 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v549\" (UniqueName: \"kubernetes.io/projected/18f0b18a-ce72-499d-bb88-33d0b21a246d-kube-api-access-5v549\") pod \"18f0b18a-ce72-499d-bb88-33d0b21a246d\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " Feb 24 05:51:48.339039 master-0 kubenswrapper[34361]: I0224 05:51:48.338846 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-additional-scripts\") pod \"18f0b18a-ce72-499d-bb88-33d0b21a246d\" (UID: \"18f0b18a-ce72-499d-bb88-33d0b21a246d\") " Feb 24 05:51:48.340333 master-0 kubenswrapper[34361]: I0224 05:51:48.340265 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "18f0b18a-ce72-499d-bb88-33d0b21a246d" (UID: "18f0b18a-ce72-499d-bb88-33d0b21a246d"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:51:48.340474 master-0 kubenswrapper[34361]: I0224 05:51:48.340344 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "18f0b18a-ce72-499d-bb88-33d0b21a246d" (UID: "18f0b18a-ce72-499d-bb88-33d0b21a246d"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:51:48.341621 master-0 kubenswrapper[34361]: I0224 05:51:48.341574 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-scripts" (OuterVolumeSpecName: "scripts") pod "18f0b18a-ce72-499d-bb88-33d0b21a246d" (UID: "18f0b18a-ce72-499d-bb88-33d0b21a246d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:48.341730 master-0 kubenswrapper[34361]: I0224 05:51:48.341634 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run" (OuterVolumeSpecName: "var-run") pod "18f0b18a-ce72-499d-bb88-33d0b21a246d" (UID: "18f0b18a-ce72-499d-bb88-33d0b21a246d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:51:48.343497 master-0 kubenswrapper[34361]: I0224 05:51:48.343216 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "18f0b18a-ce72-499d-bb88-33d0b21a246d" (UID: "18f0b18a-ce72-499d-bb88-33d0b21a246d"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:48.366401 master-0 kubenswrapper[34361]: I0224 05:51:48.366244 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f0b18a-ce72-499d-bb88-33d0b21a246d-kube-api-access-5v549" (OuterVolumeSpecName: "kube-api-access-5v549") pod "18f0b18a-ce72-499d-bb88-33d0b21a246d" (UID: "18f0b18a-ce72-499d-bb88-33d0b21a246d"). InnerVolumeSpecName "kube-api-access-5v549". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:48.441434 master-0 kubenswrapper[34361]: I0224 05:51:48.441230 34361 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:48.441434 master-0 kubenswrapper[34361]: I0224 05:51:48.441291 34361 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-run\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:48.441434 master-0 kubenswrapper[34361]: I0224 05:51:48.441303 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v549\" (UniqueName: \"kubernetes.io/projected/18f0b18a-ce72-499d-bb88-33d0b21a246d-kube-api-access-5v549\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:48.441434 master-0 kubenswrapper[34361]: I0224 05:51:48.441329 34361 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:48.441434 master-0 kubenswrapper[34361]: I0224 05:51:48.441342 34361 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/18f0b18a-ce72-499d-bb88-33d0b21a246d-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:48.441434 master-0 kubenswrapper[34361]: I0224 05:51:48.441351 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/18f0b18a-ce72-499d-bb88-33d0b21a246d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:48.772397 master-0 kubenswrapper[34361]: I0224 05:51:48.770780 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5kh8v-config-gtgbg" 
event={"ID":"18f0b18a-ce72-499d-bb88-33d0b21a246d","Type":"ContainerDied","Data":"287bba7777a429afda5b0124cfdd5694a753de0569bd1682189b5f87c9cec670"} Feb 24 05:51:48.772397 master-0 kubenswrapper[34361]: I0224 05:51:48.770854 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="287bba7777a429afda5b0124cfdd5694a753de0569bd1682189b5f87c9cec670" Feb 24 05:51:48.772397 master-0 kubenswrapper[34361]: I0224 05:51:48.770936 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5kh8v-config-gtgbg" Feb 24 05:51:49.385530 master-0 kubenswrapper[34361]: I0224 05:51:49.385437 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5kh8v-config-gtgbg"] Feb 24 05:51:49.399646 master-0 kubenswrapper[34361]: I0224 05:51:49.399571 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5kh8v-config-gtgbg"] Feb 24 05:51:49.792872 master-0 kubenswrapper[34361]: I0224 05:51:49.792773 34361 generic.go:334] "Generic (PLEG): container finished" podID="36f1ab4b-258e-434c-8674-0375758ffd49" containerID="505c85bf407d16734485972c8f8ba68a955434679874c9188aa62bcaf5c2307a" exitCode=0 Feb 24 05:51:49.792872 master-0 kubenswrapper[34361]: I0224 05:51:49.792861 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f4vxh" event={"ID":"36f1ab4b-258e-434c-8674-0375758ffd49","Type":"ContainerDied","Data":"505c85bf407d16734485972c8f8ba68a955434679874c9188aa62bcaf5c2307a"} Feb 24 05:51:50.620063 master-0 kubenswrapper[34361]: I0224 05:51:50.619986 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f0b18a-ce72-499d-bb88-33d0b21a246d" path="/var/lib/kubelet/pods/18f0b18a-ce72-499d-bb88-33d0b21a246d/volumes" Feb 24 05:51:51.653128 master-0 kubenswrapper[34361]: I0224 05:51:51.653052 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-f4vxh" Feb 24 05:51:51.740190 master-0 kubenswrapper[34361]: I0224 05:51:51.740076 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-config-data\") pod \"36f1ab4b-258e-434c-8674-0375758ffd49\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " Feb 24 05:51:51.740464 master-0 kubenswrapper[34361]: I0224 05:51:51.740306 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxdwh\" (UniqueName: \"kubernetes.io/projected/36f1ab4b-258e-434c-8674-0375758ffd49-kube-api-access-nxdwh\") pod \"36f1ab4b-258e-434c-8674-0375758ffd49\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " Feb 24 05:51:51.740571 master-0 kubenswrapper[34361]: I0224 05:51:51.740494 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-combined-ca-bundle\") pod \"36f1ab4b-258e-434c-8674-0375758ffd49\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " Feb 24 05:51:51.740614 master-0 kubenswrapper[34361]: I0224 05:51:51.740576 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-db-sync-config-data\") pod \"36f1ab4b-258e-434c-8674-0375758ffd49\" (UID: \"36f1ab4b-258e-434c-8674-0375758ffd49\") " Feb 24 05:51:51.750359 master-0 kubenswrapper[34361]: I0224 05:51:51.750261 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "36f1ab4b-258e-434c-8674-0375758ffd49" (UID: "36f1ab4b-258e-434c-8674-0375758ffd49"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:51:51.750460 master-0 kubenswrapper[34361]: I0224 05:51:51.750330 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f1ab4b-258e-434c-8674-0375758ffd49-kube-api-access-nxdwh" (OuterVolumeSpecName: "kube-api-access-nxdwh") pod "36f1ab4b-258e-434c-8674-0375758ffd49" (UID: "36f1ab4b-258e-434c-8674-0375758ffd49"). InnerVolumeSpecName "kube-api-access-nxdwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:51.780105 master-0 kubenswrapper[34361]: I0224 05:51:51.779952 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36f1ab4b-258e-434c-8674-0375758ffd49" (UID: "36f1ab4b-258e-434c-8674-0375758ffd49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:51:51.804689 master-0 kubenswrapper[34361]: I0224 05:51:51.804150 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-config-data" (OuterVolumeSpecName: "config-data") pod "36f1ab4b-258e-434c-8674-0375758ffd49" (UID: "36f1ab4b-258e-434c-8674-0375758ffd49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:51:51.819687 master-0 kubenswrapper[34361]: I0224 05:51:51.819628 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-f4vxh" event={"ID":"36f1ab4b-258e-434c-8674-0375758ffd49","Type":"ContainerDied","Data":"e7b55dbbec1fa720ce0965143242e9d550a47fd64b16551c121e8d5ddee00643"} Feb 24 05:51:51.819789 master-0 kubenswrapper[34361]: I0224 05:51:51.819692 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b55dbbec1fa720ce0965143242e9d550a47fd64b16551c121e8d5ddee00643" Feb 24 05:51:51.819789 master-0 kubenswrapper[34361]: I0224 05:51:51.819721 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-f4vxh" Feb 24 05:51:51.845163 master-0 kubenswrapper[34361]: I0224 05:51:51.845093 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:51.845163 master-0 kubenswrapper[34361]: I0224 05:51:51.845156 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxdwh\" (UniqueName: \"kubernetes.io/projected/36f1ab4b-258e-434c-8674-0375758ffd49-kube-api-access-nxdwh\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:51.845400 master-0 kubenswrapper[34361]: I0224 05:51:51.845173 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:51.845400 master-0 kubenswrapper[34361]: I0224 05:51:51.845201 34361 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/36f1ab4b-258e-434c-8674-0375758ffd49-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:52.390544 master-0 
kubenswrapper[34361]: I0224 05:51:52.389850 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fbf68b9d7-p96gq"] Feb 24 05:51:52.390835 master-0 kubenswrapper[34361]: I0224 05:51:52.390673 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" podUID="b2664286-54db-449b-aee5-fbfa93ab489f" containerName="dnsmasq-dns" containerID="cri-o://85663d123135fcf7174a92dc98c915e80f75576619d7de39fd2a4d3c07cdb68c" gracePeriod=10 Feb 24 05:51:52.395728 master-0 kubenswrapper[34361]: I0224 05:51:52.395653 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:52.441226 master-0 kubenswrapper[34361]: I0224 05:51:52.441150 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-674c8b7b9c-9fj6z"] Feb 24 05:51:52.442028 master-0 kubenswrapper[34361]: E0224 05:51:52.441981 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f1ab4b-258e-434c-8674-0375758ffd49" containerName="glance-db-sync" Feb 24 05:51:52.442074 master-0 kubenswrapper[34361]: I0224 05:51:52.442023 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f1ab4b-258e-434c-8674-0375758ffd49" containerName="glance-db-sync" Feb 24 05:51:52.442109 master-0 kubenswrapper[34361]: E0224 05:51:52.442063 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f0b18a-ce72-499d-bb88-33d0b21a246d" containerName="ovn-config" Feb 24 05:51:52.442109 master-0 kubenswrapper[34361]: I0224 05:51:52.442092 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f0b18a-ce72-499d-bb88-33d0b21a246d" containerName="ovn-config" Feb 24 05:51:52.443692 master-0 kubenswrapper[34361]: I0224 05:51:52.442724 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f0b18a-ce72-499d-bb88-33d0b21a246d" containerName="ovn-config" Feb 24 05:51:52.443692 master-0 kubenswrapper[34361]: I0224 
05:51:52.442778 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f1ab4b-258e-434c-8674-0375758ffd49" containerName="glance-db-sync" Feb 24 05:51:52.445957 master-0 kubenswrapper[34361]: I0224 05:51:52.445928 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.461875 master-0 kubenswrapper[34361]: I0224 05:51:52.461806 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" podUID="b2664286-54db-449b-aee5-fbfa93ab489f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.190:5353: connect: connection refused" Feb 24 05:51:52.473895 master-0 kubenswrapper[34361]: I0224 05:51:52.473826 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674c8b7b9c-9fj6z"] Feb 24 05:51:52.603440 master-0 kubenswrapper[34361]: I0224 05:51:52.601596 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-sb\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.603440 master-0 kubenswrapper[34361]: I0224 05:51:52.601670 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-nb\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.603440 master-0 kubenswrapper[34361]: I0224 05:51:52.601716 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-svc\") pod 
\"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.603440 master-0 kubenswrapper[34361]: I0224 05:51:52.601739 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-config\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.603440 master-0 kubenswrapper[34361]: I0224 05:51:52.601798 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqc48\" (UniqueName: \"kubernetes.io/projected/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-kube-api-access-vqc48\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.603440 master-0 kubenswrapper[34361]: I0224 05:51:52.601824 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-swift-storage-0\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.705340 master-0 kubenswrapper[34361]: I0224 05:51:52.705242 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-svc\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.705340 master-0 kubenswrapper[34361]: I0224 05:51:52.705351 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-config\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.706083 master-0 kubenswrapper[34361]: I0224 05:51:52.705493 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqc48\" (UniqueName: \"kubernetes.io/projected/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-kube-api-access-vqc48\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.706083 master-0 kubenswrapper[34361]: I0224 05:51:52.705544 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-swift-storage-0\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.706083 master-0 kubenswrapper[34361]: I0224 05:51:52.705642 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-sb\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.706083 master-0 kubenswrapper[34361]: I0224 05:51:52.705690 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-nb\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.708524 master-0 kubenswrapper[34361]: I0224 05:51:52.707963 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-svc\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.709520 master-0 kubenswrapper[34361]: I0224 05:51:52.709161 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-config\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.710006 master-0 kubenswrapper[34361]: I0224 05:51:52.709910 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-swift-storage-0\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.711092 master-0 kubenswrapper[34361]: I0224 05:51:52.710906 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-sb\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.711185 master-0 kubenswrapper[34361]: I0224 05:51:52.711094 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-nb\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.734374 master-0 kubenswrapper[34361]: I0224 05:51:52.732721 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqc48\" (UniqueName: 
\"kubernetes.io/projected/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-kube-api-access-vqc48\") pod \"dnsmasq-dns-674c8b7b9c-9fj6z\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:52.846203 master-0 kubenswrapper[34361]: I0224 05:51:52.846110 34361 generic.go:334] "Generic (PLEG): container finished" podID="b2664286-54db-449b-aee5-fbfa93ab489f" containerID="85663d123135fcf7174a92dc98c915e80f75576619d7de39fd2a4d3c07cdb68c" exitCode=0 Feb 24 05:51:52.846573 master-0 kubenswrapper[34361]: I0224 05:51:52.846211 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" event={"ID":"b2664286-54db-449b-aee5-fbfa93ab489f","Type":"ContainerDied","Data":"85663d123135fcf7174a92dc98c915e80f75576619d7de39fd2a4d3c07cdb68c"} Feb 24 05:51:52.901439 master-0 kubenswrapper[34361]: I0224 05:51:52.901378 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:53.061835 master-0 kubenswrapper[34361]: I0224 05:51:53.061628 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:53.219864 master-0 kubenswrapper[34361]: I0224 05:51:53.219650 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2bpb\" (UniqueName: \"kubernetes.io/projected/b2664286-54db-449b-aee5-fbfa93ab489f-kube-api-access-s2bpb\") pod \"b2664286-54db-449b-aee5-fbfa93ab489f\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " Feb 24 05:51:53.219864 master-0 kubenswrapper[34361]: I0224 05:51:53.219752 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-svc\") pod \"b2664286-54db-449b-aee5-fbfa93ab489f\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " Feb 24 05:51:53.220152 master-0 kubenswrapper[34361]: I0224 05:51:53.219959 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-sb\") pod \"b2664286-54db-449b-aee5-fbfa93ab489f\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " Feb 24 05:51:53.220152 master-0 kubenswrapper[34361]: I0224 05:51:53.220084 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-nb\") pod \"b2664286-54db-449b-aee5-fbfa93ab489f\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " Feb 24 05:51:53.220217 master-0 kubenswrapper[34361]: I0224 05:51:53.220170 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-swift-storage-0\") pod \"b2664286-54db-449b-aee5-fbfa93ab489f\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " Feb 24 05:51:53.220217 master-0 kubenswrapper[34361]: I0224 05:51:53.220201 
34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-config\") pod \"b2664286-54db-449b-aee5-fbfa93ab489f\" (UID: \"b2664286-54db-449b-aee5-fbfa93ab489f\") " Feb 24 05:51:53.226599 master-0 kubenswrapper[34361]: I0224 05:51:53.226499 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2664286-54db-449b-aee5-fbfa93ab489f-kube-api-access-s2bpb" (OuterVolumeSpecName: "kube-api-access-s2bpb") pod "b2664286-54db-449b-aee5-fbfa93ab489f" (UID: "b2664286-54db-449b-aee5-fbfa93ab489f"). InnerVolumeSpecName "kube-api-access-s2bpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:51:53.280954 master-0 kubenswrapper[34361]: I0224 05:51:53.280791 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b2664286-54db-449b-aee5-fbfa93ab489f" (UID: "b2664286-54db-449b-aee5-fbfa93ab489f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:53.283347 master-0 kubenswrapper[34361]: I0224 05:51:53.283256 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b2664286-54db-449b-aee5-fbfa93ab489f" (UID: "b2664286-54db-449b-aee5-fbfa93ab489f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:53.286909 master-0 kubenswrapper[34361]: I0224 05:51:53.286840 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b2664286-54db-449b-aee5-fbfa93ab489f" (UID: "b2664286-54db-449b-aee5-fbfa93ab489f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:53.306741 master-0 kubenswrapper[34361]: I0224 05:51:53.306665 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b2664286-54db-449b-aee5-fbfa93ab489f" (UID: "b2664286-54db-449b-aee5-fbfa93ab489f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:53.315903 master-0 kubenswrapper[34361]: I0224 05:51:53.315830 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-config" (OuterVolumeSpecName: "config") pod "b2664286-54db-449b-aee5-fbfa93ab489f" (UID: "b2664286-54db-449b-aee5-fbfa93ab489f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:51:53.323146 master-0 kubenswrapper[34361]: I0224 05:51:53.323098 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:53.323146 master-0 kubenswrapper[34361]: I0224 05:51:53.323132 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:53.323146 master-0 kubenswrapper[34361]: I0224 05:51:53.323144 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2bpb\" (UniqueName: \"kubernetes.io/projected/b2664286-54db-449b-aee5-fbfa93ab489f-kube-api-access-s2bpb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:53.323146 master-0 kubenswrapper[34361]: I0224 05:51:53.323155 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:53.323501 master-0 kubenswrapper[34361]: I0224 05:51:53.323165 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:53.323501 master-0 kubenswrapper[34361]: I0224 05:51:53.323174 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b2664286-54db-449b-aee5-fbfa93ab489f-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:51:53.406080 master-0 kubenswrapper[34361]: I0224 05:51:53.406010 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674c8b7b9c-9fj6z"] Feb 24 05:51:53.865813 master-0 
kubenswrapper[34361]: I0224 05:51:53.865664 34361 generic.go:334] "Generic (PLEG): container finished" podID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" containerID="25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4" exitCode=0 Feb 24 05:51:53.866476 master-0 kubenswrapper[34361]: I0224 05:51:53.866450 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" event={"ID":"ba54c348-0fa9-4fa5-8c7b-77aef67518a2","Type":"ContainerDied","Data":"25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4"} Feb 24 05:51:53.866588 master-0 kubenswrapper[34361]: I0224 05:51:53.866568 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" event={"ID":"ba54c348-0fa9-4fa5-8c7b-77aef67518a2","Type":"ContainerStarted","Data":"ce98c0737b06c3c284339518e7a2aa21b1915edb7acf5ca02c7bfa31a07f6bf3"} Feb 24 05:51:53.871458 master-0 kubenswrapper[34361]: I0224 05:51:53.871399 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" event={"ID":"b2664286-54db-449b-aee5-fbfa93ab489f","Type":"ContainerDied","Data":"f6eecb4a8b8598aa9bf7dd6c55b9a46acc5425afce1880dc64f178fbedb97023"} Feb 24 05:51:53.871555 master-0 kubenswrapper[34361]: I0224 05:51:53.871497 34361 scope.go:117] "RemoveContainer" containerID="85663d123135fcf7174a92dc98c915e80f75576619d7de39fd2a4d3c07cdb68c" Feb 24 05:51:53.871555 master-0 kubenswrapper[34361]: I0224 05:51:53.871503 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fbf68b9d7-p96gq" Feb 24 05:51:53.920711 master-0 kubenswrapper[34361]: I0224 05:51:53.920638 34361 scope.go:117] "RemoveContainer" containerID="1874f97f3f35d8bc961fa39e86a113dada924bd221b4ecf58715baa08fbaf265" Feb 24 05:51:54.121972 master-0 kubenswrapper[34361]: I0224 05:51:54.121742 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fbf68b9d7-p96gq"] Feb 24 05:51:54.135910 master-0 kubenswrapper[34361]: I0224 05:51:54.135501 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fbf68b9d7-p96gq"] Feb 24 05:51:54.612488 master-0 kubenswrapper[34361]: I0224 05:51:54.612425 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2664286-54db-449b-aee5-fbfa93ab489f" path="/var/lib/kubelet/pods/b2664286-54db-449b-aee5-fbfa93ab489f/volumes" Feb 24 05:51:54.892439 master-0 kubenswrapper[34361]: I0224 05:51:54.888902 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" event={"ID":"ba54c348-0fa9-4fa5-8c7b-77aef67518a2","Type":"ContainerStarted","Data":"cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823"} Feb 24 05:51:54.892439 master-0 kubenswrapper[34361]: I0224 05:51:54.889203 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:51:54.919948 master-0 kubenswrapper[34361]: I0224 05:51:54.919815 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" podStartSLOduration=2.919783444 podStartE2EDuration="2.919783444s" podCreationTimestamp="2026-02-24 05:51:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:51:54.917218005 +0000 UTC m=+874.619835081" watchObservedRunningTime="2026-02-24 05:51:54.919783444 +0000 UTC m=+874.622400490" Feb 24 
05:51:56.899891 master-0 kubenswrapper[34361]: I0224 05:51:56.899700 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 24 05:51:57.337953 master-0 kubenswrapper[34361]: I0224 05:51:57.329328 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-sns65"] Feb 24 05:51:57.337953 master-0 kubenswrapper[34361]: E0224 05:51:57.329927 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2664286-54db-449b-aee5-fbfa93ab489f" containerName="init" Feb 24 05:51:57.337953 master-0 kubenswrapper[34361]: I0224 05:51:57.329945 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2664286-54db-449b-aee5-fbfa93ab489f" containerName="init" Feb 24 05:51:57.337953 master-0 kubenswrapper[34361]: E0224 05:51:57.329981 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2664286-54db-449b-aee5-fbfa93ab489f" containerName="dnsmasq-dns" Feb 24 05:51:57.337953 master-0 kubenswrapper[34361]: I0224 05:51:57.329988 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2664286-54db-449b-aee5-fbfa93ab489f" containerName="dnsmasq-dns" Feb 24 05:51:57.337953 master-0 kubenswrapper[34361]: I0224 05:51:57.330227 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2664286-54db-449b-aee5-fbfa93ab489f" containerName="dnsmasq-dns" Feb 24 05:51:57.337953 master-0 kubenswrapper[34361]: I0224 05:51:57.330967 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-sns65" Feb 24 05:51:57.350407 master-0 kubenswrapper[34361]: I0224 05:51:57.350350 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sns65"] Feb 24 05:51:57.455243 master-0 kubenswrapper[34361]: I0224 05:51:57.453041 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg2ns\" (UniqueName: \"kubernetes.io/projected/d66837be-9db0-4e89-be7a-fbcd10882b17-kube-api-access-gg2ns\") pod \"cinder-db-create-sns65\" (UID: \"d66837be-9db0-4e89-be7a-fbcd10882b17\") " pod="openstack/cinder-db-create-sns65" Feb 24 05:51:57.455243 master-0 kubenswrapper[34361]: I0224 05:51:57.453197 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66837be-9db0-4e89-be7a-fbcd10882b17-operator-scripts\") pod \"cinder-db-create-sns65\" (UID: \"d66837be-9db0-4e89-be7a-fbcd10882b17\") " pod="openstack/cinder-db-create-sns65" Feb 24 05:51:57.466363 master-0 kubenswrapper[34361]: I0224 05:51:57.464892 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3d67-account-create-update-vkpgp"] Feb 24 05:51:57.473217 master-0 kubenswrapper[34361]: I0224 05:51:57.466804 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:51:57.473217 master-0 kubenswrapper[34361]: I0224 05:51:57.469671 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 24 05:51:57.475173 master-0 kubenswrapper[34361]: I0224 05:51:57.475101 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3d67-account-create-update-vkpgp"] Feb 24 05:51:57.556330 master-0 kubenswrapper[34361]: I0224 05:51:57.556230 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg2ns\" (UniqueName: \"kubernetes.io/projected/d66837be-9db0-4e89-be7a-fbcd10882b17-kube-api-access-gg2ns\") pod \"cinder-db-create-sns65\" (UID: \"d66837be-9db0-4e89-be7a-fbcd10882b17\") " pod="openstack/cinder-db-create-sns65" Feb 24 05:51:57.556643 master-0 kubenswrapper[34361]: I0224 05:51:57.556367 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-operator-scripts\") pod \"cinder-3d67-account-create-update-vkpgp\" (UID: \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\") " pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:51:57.556643 master-0 kubenswrapper[34361]: I0224 05:51:57.556407 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66837be-9db0-4e89-be7a-fbcd10882b17-operator-scripts\") pod \"cinder-db-create-sns65\" (UID: \"d66837be-9db0-4e89-be7a-fbcd10882b17\") " pod="openstack/cinder-db-create-sns65" Feb 24 05:51:57.556643 master-0 kubenswrapper[34361]: I0224 05:51:57.556442 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drczk\" (UniqueName: \"kubernetes.io/projected/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-kube-api-access-drczk\") 
pod \"cinder-3d67-account-create-update-vkpgp\" (UID: \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\") " pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:51:57.557783 master-0 kubenswrapper[34361]: I0224 05:51:57.557740 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66837be-9db0-4e89-be7a-fbcd10882b17-operator-scripts\") pod \"cinder-db-create-sns65\" (UID: \"d66837be-9db0-4e89-be7a-fbcd10882b17\") " pod="openstack/cinder-db-create-sns65" Feb 24 05:51:57.585287 master-0 kubenswrapper[34361]: I0224 05:51:57.585159 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg2ns\" (UniqueName: \"kubernetes.io/projected/d66837be-9db0-4e89-be7a-fbcd10882b17-kube-api-access-gg2ns\") pod \"cinder-db-create-sns65\" (UID: \"d66837be-9db0-4e89-be7a-fbcd10882b17\") " pod="openstack/cinder-db-create-sns65" Feb 24 05:51:57.653760 master-0 kubenswrapper[34361]: I0224 05:51:57.652496 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fcxq8"] Feb 24 05:51:57.654362 master-0 kubenswrapper[34361]: I0224 05:51:57.654292 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fcxq8" Feb 24 05:51:57.660253 master-0 kubenswrapper[34361]: I0224 05:51:57.660190 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-operator-scripts\") pod \"cinder-3d67-account-create-update-vkpgp\" (UID: \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\") " pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:51:57.660253 master-0 kubenswrapper[34361]: I0224 05:51:57.660267 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drczk\" (UniqueName: \"kubernetes.io/projected/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-kube-api-access-drczk\") pod \"cinder-3d67-account-create-update-vkpgp\" (UID: \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\") " pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:51:57.661544 master-0 kubenswrapper[34361]: I0224 05:51:57.661500 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-operator-scripts\") pod \"cinder-3d67-account-create-update-vkpgp\" (UID: \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\") " pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:51:57.676834 master-0 kubenswrapper[34361]: I0224 05:51:57.676764 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-828b-account-create-update-4bgnt"] Feb 24 05:51:57.713619 master-0 kubenswrapper[34361]: I0224 05:51:57.707526 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:51:57.713619 master-0 kubenswrapper[34361]: I0224 05:51:57.713134 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 24 05:51:57.731243 master-0 kubenswrapper[34361]: I0224 05:51:57.730539 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drczk\" (UniqueName: \"kubernetes.io/projected/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-kube-api-access-drczk\") pod \"cinder-3d67-account-create-update-vkpgp\" (UID: \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\") " pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:51:57.752511 master-0 kubenswrapper[34361]: I0224 05:51:57.752432 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sns65" Feb 24 05:51:57.767230 master-0 kubenswrapper[34361]: I0224 05:51:57.767127 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsc4f\" (UniqueName: \"kubernetes.io/projected/c4f8406c-2516-4c44-b748-bdc79ef32db1-kube-api-access-jsc4f\") pod \"neutron-db-create-fcxq8\" (UID: \"c4f8406c-2516-4c44-b748-bdc79ef32db1\") " pod="openstack/neutron-db-create-fcxq8" Feb 24 05:51:57.770420 master-0 kubenswrapper[34361]: I0224 05:51:57.769045 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzgwz\" (UniqueName: \"kubernetes.io/projected/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-kube-api-access-hzgwz\") pod \"neutron-828b-account-create-update-4bgnt\" (UID: \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\") " pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:51:57.770420 master-0 kubenswrapper[34361]: I0224 05:51:57.769460 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-operator-scripts\") pod \"neutron-828b-account-create-update-4bgnt\" (UID: \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\") " pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:51:57.770420 master-0 kubenswrapper[34361]: I0224 05:51:57.769796 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f8406c-2516-4c44-b748-bdc79ef32db1-operator-scripts\") pod \"neutron-db-create-fcxq8\" (UID: \"c4f8406c-2516-4c44-b748-bdc79ef32db1\") " pod="openstack/neutron-db-create-fcxq8" Feb 24 05:51:57.787442 master-0 kubenswrapper[34361]: I0224 05:51:57.786392 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fcxq8"] Feb 24 05:51:57.795056 master-0 kubenswrapper[34361]: I0224 05:51:57.788938 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:51:57.840905 master-0 kubenswrapper[34361]: I0224 05:51:57.836963 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-828b-account-create-update-4bgnt"] Feb 24 05:51:57.881283 master-0 kubenswrapper[34361]: I0224 05:51:57.871986 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzgwz\" (UniqueName: \"kubernetes.io/projected/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-kube-api-access-hzgwz\") pod \"neutron-828b-account-create-update-4bgnt\" (UID: \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\") " pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:51:57.881283 master-0 kubenswrapper[34361]: I0224 05:51:57.872137 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-operator-scripts\") pod \"neutron-828b-account-create-update-4bgnt\" (UID: 
\"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\") " pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:51:57.881283 master-0 kubenswrapper[34361]: I0224 05:51:57.872421 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f8406c-2516-4c44-b748-bdc79ef32db1-operator-scripts\") pod \"neutron-db-create-fcxq8\" (UID: \"c4f8406c-2516-4c44-b748-bdc79ef32db1\") " pod="openstack/neutron-db-create-fcxq8" Feb 24 05:51:57.881283 master-0 kubenswrapper[34361]: I0224 05:51:57.872453 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsc4f\" (UniqueName: \"kubernetes.io/projected/c4f8406c-2516-4c44-b748-bdc79ef32db1-kube-api-access-jsc4f\") pod \"neutron-db-create-fcxq8\" (UID: \"c4f8406c-2516-4c44-b748-bdc79ef32db1\") " pod="openstack/neutron-db-create-fcxq8" Feb 24 05:51:57.881283 master-0 kubenswrapper[34361]: I0224 05:51:57.873341 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f8406c-2516-4c44-b748-bdc79ef32db1-operator-scripts\") pod \"neutron-db-create-fcxq8\" (UID: \"c4f8406c-2516-4c44-b748-bdc79ef32db1\") " pod="openstack/neutron-db-create-fcxq8" Feb 24 05:51:57.881283 master-0 kubenswrapper[34361]: I0224 05:51:57.879736 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-operator-scripts\") pod \"neutron-828b-account-create-update-4bgnt\" (UID: \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\") " pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:51:57.890972 master-0 kubenswrapper[34361]: I0224 05:51:57.883071 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-j2nkz"] Feb 24 05:51:57.890972 master-0 kubenswrapper[34361]: I0224 05:51:57.886524 34361 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:57.895051 master-0 kubenswrapper[34361]: I0224 05:51:57.894997 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 05:51:57.895284 master-0 kubenswrapper[34361]: I0224 05:51:57.895260 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 05:51:57.895435 master-0 kubenswrapper[34361]: I0224 05:51:57.895416 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 05:51:57.901329 master-0 kubenswrapper[34361]: I0224 05:51:57.901235 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-j2nkz"] Feb 24 05:51:57.903634 master-0 kubenswrapper[34361]: I0224 05:51:57.903533 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzgwz\" (UniqueName: \"kubernetes.io/projected/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-kube-api-access-hzgwz\") pod \"neutron-828b-account-create-update-4bgnt\" (UID: \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\") " pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:51:57.909018 master-0 kubenswrapper[34361]: I0224 05:51:57.908934 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsc4f\" (UniqueName: \"kubernetes.io/projected/c4f8406c-2516-4c44-b748-bdc79ef32db1-kube-api-access-jsc4f\") pod \"neutron-db-create-fcxq8\" (UID: \"c4f8406c-2516-4c44-b748-bdc79ef32db1\") " pod="openstack/neutron-db-create-fcxq8" Feb 24 05:51:57.912693 master-0 kubenswrapper[34361]: I0224 05:51:57.912256 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:51:58.017200 master-0 kubenswrapper[34361]: I0224 05:51:58.016492 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fcxq8" Feb 24 05:51:58.134421 master-0 kubenswrapper[34361]: I0224 05:51:58.133953 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdpz\" (UniqueName: \"kubernetes.io/projected/a50d2174-643c-425d-92e5-ff1ab4d12f7a-kube-api-access-7pdpz\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.134421 master-0 kubenswrapper[34361]: I0224 05:51:58.134094 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-combined-ca-bundle\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.134421 master-0 kubenswrapper[34361]: I0224 05:51:58.134163 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-config-data\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.211690 master-0 kubenswrapper[34361]: I0224 05:51:58.211460 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 24 05:51:58.255369 master-0 kubenswrapper[34361]: I0224 05:51:58.246034 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pdpz\" (UniqueName: \"kubernetes.io/projected/a50d2174-643c-425d-92e5-ff1ab4d12f7a-kube-api-access-7pdpz\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.255369 master-0 kubenswrapper[34361]: I0224 05:51:58.246167 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-combined-ca-bundle\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.255369 master-0 kubenswrapper[34361]: I0224 05:51:58.246236 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-config-data\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.290873 master-0 kubenswrapper[34361]: I0224 05:51:58.290801 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pdpz\" (UniqueName: \"kubernetes.io/projected/a50d2174-643c-425d-92e5-ff1ab4d12f7a-kube-api-access-7pdpz\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.317695 master-0 kubenswrapper[34361]: I0224 05:51:58.316738 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-config-data\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.319707 master-0 kubenswrapper[34361]: I0224 05:51:58.318569 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-combined-ca-bundle\") pod \"keystone-db-sync-j2nkz\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") " pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.358014 master-0 kubenswrapper[34361]: I0224 05:51:58.357780 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-create-sns65"] Feb 24 05:51:58.517644 master-0 kubenswrapper[34361]: I0224 05:51:58.516481 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3d67-account-create-update-vkpgp"] Feb 24 05:51:58.532077 master-0 kubenswrapper[34361]: W0224 05:51:58.531987 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3a55d2e_a011_4a92_a4a5_3f36d34661b5.slice/crio-894e2dfe864208f11d673e73d726afd9a0ea7a1ca67dac1d4c0451ca6a88fad0 WatchSource:0}: Error finding container 894e2dfe864208f11d673e73d726afd9a0ea7a1ca67dac1d4c0451ca6a88fad0: Status 404 returned error can't find the container with id 894e2dfe864208f11d673e73d726afd9a0ea7a1ca67dac1d4c0451ca6a88fad0 Feb 24 05:51:58.626193 master-0 kubenswrapper[34361]: I0224 05:51:58.625915 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j2nkz" Feb 24 05:51:58.819588 master-0 kubenswrapper[34361]: I0224 05:51:58.819517 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-828b-account-create-update-4bgnt"] Feb 24 05:51:58.914028 master-0 kubenswrapper[34361]: I0224 05:51:58.913909 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fcxq8"] Feb 24 05:51:58.924013 master-0 kubenswrapper[34361]: W0224 05:51:58.923929 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4f8406c_2516_4c44_b748_bdc79ef32db1.slice/crio-7d01d49ef9bcc3b52ca82695cc9cfb8d6587e071ecd8c411578fce08a9b4d188 WatchSource:0}: Error finding container 7d01d49ef9bcc3b52ca82695cc9cfb8d6587e071ecd8c411578fce08a9b4d188: Status 404 returned error can't find the container with id 7d01d49ef9bcc3b52ca82695cc9cfb8d6587e071ecd8c411578fce08a9b4d188 Feb 24 05:51:58.978982 master-0 kubenswrapper[34361]: I0224 05:51:58.971783 34361 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/neutron-828b-account-create-update-4bgnt" event={"ID":"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9","Type":"ContainerStarted","Data":"b7c87670756f525818f499efa50f8ad4da44e61afb446d07b9b4fda8491204a0"} Feb 24 05:51:59.015245 master-0 kubenswrapper[34361]: I0224 05:51:59.015155 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sns65" event={"ID":"d66837be-9db0-4e89-be7a-fbcd10882b17","Type":"ContainerStarted","Data":"509b23ea003f9f1b616fb11fbe93550071804b3fb869f13891f4c8285341bbf2"} Feb 24 05:51:59.015245 master-0 kubenswrapper[34361]: I0224 05:51:59.015213 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sns65" event={"ID":"d66837be-9db0-4e89-be7a-fbcd10882b17","Type":"ContainerStarted","Data":"80a427a926597fb50aa45a54c7359b0910f72d26d1c8208c14cbb546727aa098"} Feb 24 05:51:59.024489 master-0 kubenswrapper[34361]: I0224 05:51:59.024132 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3d67-account-create-update-vkpgp" event={"ID":"d3a55d2e-a011-4a92-a4a5-3f36d34661b5","Type":"ContainerStarted","Data":"894e2dfe864208f11d673e73d726afd9a0ea7a1ca67dac1d4c0451ca6a88fad0"} Feb 24 05:51:59.028802 master-0 kubenswrapper[34361]: I0224 05:51:59.025808 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fcxq8" event={"ID":"c4f8406c-2516-4c44-b748-bdc79ef32db1","Type":"ContainerStarted","Data":"7d01d49ef9bcc3b52ca82695cc9cfb8d6587e071ecd8c411578fce08a9b4d188"} Feb 24 05:51:59.202684 master-0 kubenswrapper[34361]: I0224 05:51:59.202620 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-j2nkz"] Feb 24 05:52:00.039571 master-0 kubenswrapper[34361]: I0224 05:52:00.039503 34361 generic.go:334] "Generic (PLEG): container finished" podID="d66837be-9db0-4e89-be7a-fbcd10882b17" containerID="509b23ea003f9f1b616fb11fbe93550071804b3fb869f13891f4c8285341bbf2" exitCode=0 Feb 
24 05:52:00.040425 master-0 kubenswrapper[34361]: I0224 05:52:00.039654 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sns65" event={"ID":"d66837be-9db0-4e89-be7a-fbcd10882b17","Type":"ContainerDied","Data":"509b23ea003f9f1b616fb11fbe93550071804b3fb869f13891f4c8285341bbf2"} Feb 24 05:52:00.047961 master-0 kubenswrapper[34361]: I0224 05:52:00.047896 34361 generic.go:334] "Generic (PLEG): container finished" podID="d3a55d2e-a011-4a92-a4a5-3f36d34661b5" containerID="54eec3d598d469ce1517f3f57924ef0ec74d22cb2f19f65d7fdc0151234a97b1" exitCode=0 Feb 24 05:52:00.048213 master-0 kubenswrapper[34361]: I0224 05:52:00.048006 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3d67-account-create-update-vkpgp" event={"ID":"d3a55d2e-a011-4a92-a4a5-3f36d34661b5","Type":"ContainerDied","Data":"54eec3d598d469ce1517f3f57924ef0ec74d22cb2f19f65d7fdc0151234a97b1"} Feb 24 05:52:00.052946 master-0 kubenswrapper[34361]: I0224 05:52:00.052865 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j2nkz" event={"ID":"a50d2174-643c-425d-92e5-ff1ab4d12f7a","Type":"ContainerStarted","Data":"08f4ac1e14a802011124d37a85f6782541ceee8fd8f0f4bfcda8f6136fcaa65d"} Feb 24 05:52:00.055954 master-0 kubenswrapper[34361]: I0224 05:52:00.055878 34361 generic.go:334] "Generic (PLEG): container finished" podID="c4f8406c-2516-4c44-b748-bdc79ef32db1" containerID="247cdc915c9add122e28a4d7837e566b0d10d128b171a045c4a461105de9ab7f" exitCode=0 Feb 24 05:52:00.056649 master-0 kubenswrapper[34361]: I0224 05:52:00.056600 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fcxq8" event={"ID":"c4f8406c-2516-4c44-b748-bdc79ef32db1","Type":"ContainerDied","Data":"247cdc915c9add122e28a4d7837e566b0d10d128b171a045c4a461105de9ab7f"} Feb 24 05:52:00.059744 master-0 kubenswrapper[34361]: I0224 05:52:00.059655 34361 generic.go:334] "Generic (PLEG): container finished" 
podID="26f2ed2f-05e1-4060-8d74-200fcf3cbfe9" containerID="49c48c9bcbb12493b70b28af441b3bdc7f385caa3ed90ea6876b2fb7f910379f" exitCode=0 Feb 24 05:52:00.059831 master-0 kubenswrapper[34361]: I0224 05:52:00.059750 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-828b-account-create-update-4bgnt" event={"ID":"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9","Type":"ContainerDied","Data":"49c48c9bcbb12493b70b28af441b3bdc7f385caa3ed90ea6876b2fb7f910379f"} Feb 24 05:52:00.552355 master-0 kubenswrapper[34361]: I0224 05:52:00.552253 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sns65" Feb 24 05:52:00.624027 master-0 kubenswrapper[34361]: I0224 05:52:00.623958 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg2ns\" (UniqueName: \"kubernetes.io/projected/d66837be-9db0-4e89-be7a-fbcd10882b17-kube-api-access-gg2ns\") pod \"d66837be-9db0-4e89-be7a-fbcd10882b17\" (UID: \"d66837be-9db0-4e89-be7a-fbcd10882b17\") " Feb 24 05:52:00.624398 master-0 kubenswrapper[34361]: I0224 05:52:00.624222 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66837be-9db0-4e89-be7a-fbcd10882b17-operator-scripts\") pod \"d66837be-9db0-4e89-be7a-fbcd10882b17\" (UID: \"d66837be-9db0-4e89-be7a-fbcd10882b17\") " Feb 24 05:52:00.625169 master-0 kubenswrapper[34361]: I0224 05:52:00.625108 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66837be-9db0-4e89-be7a-fbcd10882b17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d66837be-9db0-4e89-be7a-fbcd10882b17" (UID: "d66837be-9db0-4e89-be7a-fbcd10882b17"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:00.630879 master-0 kubenswrapper[34361]: I0224 05:52:00.630804 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66837be-9db0-4e89-be7a-fbcd10882b17-kube-api-access-gg2ns" (OuterVolumeSpecName: "kube-api-access-gg2ns") pod "d66837be-9db0-4e89-be7a-fbcd10882b17" (UID: "d66837be-9db0-4e89-be7a-fbcd10882b17"). InnerVolumeSpecName "kube-api-access-gg2ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:00.727673 master-0 kubenswrapper[34361]: I0224 05:52:00.727497 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d66837be-9db0-4e89-be7a-fbcd10882b17-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:00.727673 master-0 kubenswrapper[34361]: I0224 05:52:00.727553 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg2ns\" (UniqueName: \"kubernetes.io/projected/d66837be-9db0-4e89-be7a-fbcd10882b17-kube-api-access-gg2ns\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:01.075629 master-0 kubenswrapper[34361]: I0224 05:52:01.075537 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sns65" event={"ID":"d66837be-9db0-4e89-be7a-fbcd10882b17","Type":"ContainerDied","Data":"80a427a926597fb50aa45a54c7359b0910f72d26d1c8208c14cbb546727aa098"} Feb 24 05:52:01.075629 master-0 kubenswrapper[34361]: I0224 05:52:01.075638 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80a427a926597fb50aa45a54c7359b0910f72d26d1c8208c14cbb546727aa098" Feb 24 05:52:01.075629 master-0 kubenswrapper[34361]: I0224 05:52:01.075776 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-sns65" Feb 24 05:52:02.903472 master-0 kubenswrapper[34361]: I0224 05:52:02.903354 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:52:03.041323 master-0 kubenswrapper[34361]: I0224 05:52:03.040155 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-4n57j"] Feb 24 05:52:03.041490 master-0 kubenswrapper[34361]: I0224 05:52:03.041294 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" podUID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" containerName="dnsmasq-dns" containerID="cri-o://1098132eae83b70ef21828512180bd59c746271cf3d8ad31f2918bf4bba914d5" gracePeriod=10 Feb 24 05:52:04.159202 master-0 kubenswrapper[34361]: I0224 05:52:04.159123 34361 generic.go:334] "Generic (PLEG): container finished" podID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" containerID="1098132eae83b70ef21828512180bd59c746271cf3d8ad31f2918bf4bba914d5" exitCode=0 Feb 24 05:52:04.159973 master-0 kubenswrapper[34361]: I0224 05:52:04.159212 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" event={"ID":"e1fdfa97-4eba-4aa9-88e0-3b426829d15e","Type":"ContainerDied","Data":"1098132eae83b70ef21828512180bd59c746271cf3d8ad31f2918bf4bba914d5"} Feb 24 05:52:04.601441 master-0 kubenswrapper[34361]: I0224 05:52:04.600921 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fcxq8" Feb 24 05:52:04.616118 master-0 kubenswrapper[34361]: I0224 05:52:04.616051 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-828b-account-create-update-4bgnt" Feb 24 05:52:04.773788 master-0 kubenswrapper[34361]: I0224 05:52:04.772581 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3d67-account-create-update-vkpgp" Feb 24 05:52:04.794902 master-0 kubenswrapper[34361]: I0224 05:52:04.794810 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f8406c-2516-4c44-b748-bdc79ef32db1-operator-scripts\") pod \"c4f8406c-2516-4c44-b748-bdc79ef32db1\" (UID: \"c4f8406c-2516-4c44-b748-bdc79ef32db1\") " Feb 24 05:52:04.795388 master-0 kubenswrapper[34361]: I0224 05:52:04.795333 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsc4f\" (UniqueName: \"kubernetes.io/projected/c4f8406c-2516-4c44-b748-bdc79ef32db1-kube-api-access-jsc4f\") pod \"c4f8406c-2516-4c44-b748-bdc79ef32db1\" (UID: \"c4f8406c-2516-4c44-b748-bdc79ef32db1\") " Feb 24 05:52:04.795472 master-0 kubenswrapper[34361]: I0224 05:52:04.795434 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-operator-scripts\") pod \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\" (UID: \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\") " Feb 24 05:52:04.797736 master-0 kubenswrapper[34361]: I0224 05:52:04.797700 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzgwz\" (UniqueName: \"kubernetes.io/projected/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-kube-api-access-hzgwz\") pod \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\" (UID: \"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9\") " Feb 24 05:52:04.801993 master-0 kubenswrapper[34361]: I0224 05:52:04.795588 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4f8406c-2516-4c44-b748-bdc79ef32db1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4f8406c-2516-4c44-b748-bdc79ef32db1" (UID: "c4f8406c-2516-4c44-b748-bdc79ef32db1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:04.801993 master-0 kubenswrapper[34361]: I0224 05:52:04.795996 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26f2ed2f-05e1-4060-8d74-200fcf3cbfe9" (UID: "26f2ed2f-05e1-4060-8d74-200fcf3cbfe9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:04.813734 master-0 kubenswrapper[34361]: I0224 05:52:04.813650 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-kube-api-access-hzgwz" (OuterVolumeSpecName: "kube-api-access-hzgwz") pod "26f2ed2f-05e1-4060-8d74-200fcf3cbfe9" (UID: "26f2ed2f-05e1-4060-8d74-200fcf3cbfe9"). InnerVolumeSpecName "kube-api-access-hzgwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:04.817280 master-0 kubenswrapper[34361]: I0224 05:52:04.817231 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4f8406c-2516-4c44-b748-bdc79ef32db1-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:04.817280 master-0 kubenswrapper[34361]: I0224 05:52:04.817272 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:04.817401 master-0 kubenswrapper[34361]: I0224 05:52:04.817287 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzgwz\" (UniqueName: \"kubernetes.io/projected/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9-kube-api-access-hzgwz\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:04.835849 master-0 kubenswrapper[34361]: I0224 05:52:04.835690 34361 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f8406c-2516-4c44-b748-bdc79ef32db1-kube-api-access-jsc4f" (OuterVolumeSpecName: "kube-api-access-jsc4f") pod "c4f8406c-2516-4c44-b748-bdc79ef32db1" (UID: "c4f8406c-2516-4c44-b748-bdc79ef32db1"). InnerVolumeSpecName "kube-api-access-jsc4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:04.891840 master-0 kubenswrapper[34361]: I0224 05:52:04.891773 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" Feb 24 05:52:04.918956 master-0 kubenswrapper[34361]: I0224 05:52:04.918869 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-operator-scripts\") pod \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\" (UID: \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\") " Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.921540 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3a55d2e-a011-4a92-a4a5-3f36d34661b5" (UID: "d3a55d2e-a011-4a92-a4a5-3f36d34661b5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.922348 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-config\") pod \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.922408 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-nb\") pod \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.922601 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-dns-svc\") pod \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.922628 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-sb\") pod \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.922712 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk49q\" (UniqueName: \"kubernetes.io/projected/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-kube-api-access-rk49q\") pod \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\" (UID: \"e1fdfa97-4eba-4aa9-88e0-3b426829d15e\") " Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.922747 34361 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-drczk\" (UniqueName: \"kubernetes.io/projected/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-kube-api-access-drczk\") pod \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\" (UID: \"d3a55d2e-a011-4a92-a4a5-3f36d34661b5\") " Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.923972 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.923999 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsc4f\" (UniqueName: \"kubernetes.io/projected/c4f8406c-2516-4c44-b748-bdc79ef32db1-kube-api-access-jsc4f\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.929363 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-kube-api-access-drczk" (OuterVolumeSpecName: "kube-api-access-drczk") pod "d3a55d2e-a011-4a92-a4a5-3f36d34661b5" (UID: "d3a55d2e-a011-4a92-a4a5-3f36d34661b5"). InnerVolumeSpecName "kube-api-access-drczk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:04.934724 master-0 kubenswrapper[34361]: I0224 05:52:04.933187 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-kube-api-access-rk49q" (OuterVolumeSpecName: "kube-api-access-rk49q") pod "e1fdfa97-4eba-4aa9-88e0-3b426829d15e" (UID: "e1fdfa97-4eba-4aa9-88e0-3b426829d15e"). InnerVolumeSpecName "kube-api-access-rk49q". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:52:04.986647 master-0 kubenswrapper[34361]: I0224 05:52:04.986574 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1fdfa97-4eba-4aa9-88e0-3b426829d15e" (UID: "e1fdfa97-4eba-4aa9-88e0-3b426829d15e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:04.992010 master-0 kubenswrapper[34361]: I0224 05:52:04.991945 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1fdfa97-4eba-4aa9-88e0-3b426829d15e" (UID: "e1fdfa97-4eba-4aa9-88e0-3b426829d15e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:05.018738 master-0 kubenswrapper[34361]: I0224 05:52:05.018667 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1fdfa97-4eba-4aa9-88e0-3b426829d15e" (UID: "e1fdfa97-4eba-4aa9-88e0-3b426829d15e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:05.021243 master-0 kubenswrapper[34361]: I0224 05:52:05.021046 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-config" (OuterVolumeSpecName: "config") pod "e1fdfa97-4eba-4aa9-88e0-3b426829d15e" (UID: "e1fdfa97-4eba-4aa9-88e0-3b426829d15e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:05.026767 master-0 kubenswrapper[34361]: I0224 05:52:05.026694 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:05.026767 master-0 kubenswrapper[34361]: I0224 05:52:05.026756 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk49q\" (UniqueName: \"kubernetes.io/projected/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-kube-api-access-rk49q\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:05.026767 master-0 kubenswrapper[34361]: I0224 05:52:05.026770 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drczk\" (UniqueName: \"kubernetes.io/projected/d3a55d2e-a011-4a92-a4a5-3f36d34661b5-kube-api-access-drczk\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:05.026948 master-0 kubenswrapper[34361]: I0224 05:52:05.026783 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:05.026948 master-0 kubenswrapper[34361]: I0224 05:52:05.026794 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:05.026948 master-0 kubenswrapper[34361]: I0224 05:52:05.026807 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1fdfa97-4eba-4aa9-88e0-3b426829d15e-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:05.176357 master-0 kubenswrapper[34361]: I0224 05:52:05.176222 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j2nkz" event={"ID":"a50d2174-643c-425d-92e5-ff1ab4d12f7a","Type":"ContainerStarted","Data":"6d2b5790114018e290bf3c0eb80fd60c045b9479ea39f45082fd548a03d99b46"}
Feb 24 05:52:05.178985 master-0 kubenswrapper[34361]: I0224 05:52:05.178936 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c55964f59-4n57j" event={"ID":"e1fdfa97-4eba-4aa9-88e0-3b426829d15e","Type":"ContainerDied","Data":"3beeceb6c9d375bd9b06673cea00e6f86f74b971d8d01e2e7f7f53b251ab2a3e"}
Feb 24 05:52:05.178985 master-0 kubenswrapper[34361]: I0224 05:52:05.178983 34361 scope.go:117] "RemoveContainer" containerID="1098132eae83b70ef21828512180bd59c746271cf3d8ad31f2918bf4bba914d5"
Feb 24 05:52:05.179185 master-0 kubenswrapper[34361]: I0224 05:52:05.179101 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c55964f59-4n57j"
Feb 24 05:52:05.196118 master-0 kubenswrapper[34361]: I0224 05:52:05.196061 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fcxq8" event={"ID":"c4f8406c-2516-4c44-b748-bdc79ef32db1","Type":"ContainerDied","Data":"7d01d49ef9bcc3b52ca82695cc9cfb8d6587e071ecd8c411578fce08a9b4d188"}
Feb 24 05:52:05.196228 master-0 kubenswrapper[34361]: I0224 05:52:05.196080 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fcxq8"
Feb 24 05:52:05.196581 master-0 kubenswrapper[34361]: I0224 05:52:05.196542 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d01d49ef9bcc3b52ca82695cc9cfb8d6587e071ecd8c411578fce08a9b4d188"
Feb 24 05:52:05.202009 master-0 kubenswrapper[34361]: I0224 05:52:05.201930 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-828b-account-create-update-4bgnt" event={"ID":"26f2ed2f-05e1-4060-8d74-200fcf3cbfe9","Type":"ContainerDied","Data":"b7c87670756f525818f499efa50f8ad4da44e61afb446d07b9b4fda8491204a0"}
Feb 24 05:52:05.202009 master-0 kubenswrapper[34361]: I0224 05:52:05.202007 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c87670756f525818f499efa50f8ad4da44e61afb446d07b9b4fda8491204a0"
Feb 24 05:52:05.203913 master-0 kubenswrapper[34361]: I0224 05:52:05.201886 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-828b-account-create-update-4bgnt"
Feb 24 05:52:05.204026 master-0 kubenswrapper[34361]: I0224 05:52:05.203980 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3d67-account-create-update-vkpgp" event={"ID":"d3a55d2e-a011-4a92-a4a5-3f36d34661b5","Type":"ContainerDied","Data":"894e2dfe864208f11d673e73d726afd9a0ea7a1ca67dac1d4c0451ca6a88fad0"}
Feb 24 05:52:05.204105 master-0 kubenswrapper[34361]: I0224 05:52:05.204039 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894e2dfe864208f11d673e73d726afd9a0ea7a1ca67dac1d4c0451ca6a88fad0"
Feb 24 05:52:05.204105 master-0 kubenswrapper[34361]: I0224 05:52:05.204068 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3d67-account-create-update-vkpgp"
Feb 24 05:52:05.223073 master-0 kubenswrapper[34361]: I0224 05:52:05.222943 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-j2nkz" podStartSLOduration=2.947690809 podStartE2EDuration="8.22291574s" podCreationTimestamp="2026-02-24 05:51:57 +0000 UTC" firstStartedPulling="2026-02-24 05:51:59.216417436 +0000 UTC m=+878.919034482" lastFinishedPulling="2026-02-24 05:52:04.491642367 +0000 UTC m=+884.194259413" observedRunningTime="2026-02-24 05:52:05.211585734 +0000 UTC m=+884.914202790" watchObservedRunningTime="2026-02-24 05:52:05.22291574 +0000 UTC m=+884.925532806"
Feb 24 05:52:05.232739 master-0 kubenswrapper[34361]: I0224 05:52:05.231712 34361 scope.go:117] "RemoveContainer" containerID="ac74574fe745ed4b9d807449690911637470260ccc958e228012866bbadc1ca8"
Feb 24 05:52:05.253723 master-0 kubenswrapper[34361]: I0224 05:52:05.253647 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-4n57j"]
Feb 24 05:52:05.268580 master-0 kubenswrapper[34361]: I0224 05:52:05.268493 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c55964f59-4n57j"]
Feb 24 05:52:05.359153 master-0 kubenswrapper[34361]: E0224 05:52:05.359013 34361 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3a55d2e_a011_4a92_a4a5_3f36d34661b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1fdfa97_4eba_4aa9_88e0_3b426829d15e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26f2ed2f_05e1_4060_8d74_200fcf3cbfe9.slice\": RecentStats: unable to find data in memory cache]"
Feb 24 05:52:06.619787 master-0 kubenswrapper[34361]: I0224 05:52:06.619685 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" path="/var/lib/kubelet/pods/e1fdfa97-4eba-4aa9-88e0-3b426829d15e/volumes"
Feb 24 05:52:10.281836 master-0 kubenswrapper[34361]: I0224 05:52:10.281739 34361 generic.go:334] "Generic (PLEG): container finished" podID="a50d2174-643c-425d-92e5-ff1ab4d12f7a" containerID="6d2b5790114018e290bf3c0eb80fd60c045b9479ea39f45082fd548a03d99b46" exitCode=0
Feb 24 05:52:10.281836 master-0 kubenswrapper[34361]: I0224 05:52:10.281821 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j2nkz" event={"ID":"a50d2174-643c-425d-92e5-ff1ab4d12f7a","Type":"ContainerDied","Data":"6d2b5790114018e290bf3c0eb80fd60c045b9479ea39f45082fd548a03d99b46"}
Feb 24 05:52:11.877704 master-0 kubenswrapper[34361]: I0224 05:52:11.877645 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j2nkz"
Feb 24 05:52:11.943414 master-0 kubenswrapper[34361]: I0224 05:52:11.943337 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-config-data\") pod \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") "
Feb 24 05:52:11.943707 master-0 kubenswrapper[34361]: I0224 05:52:11.943681 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-combined-ca-bundle\") pod \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") "
Feb 24 05:52:11.943832 master-0 kubenswrapper[34361]: I0224 05:52:11.943809 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pdpz\" (UniqueName: \"kubernetes.io/projected/a50d2174-643c-425d-92e5-ff1ab4d12f7a-kube-api-access-7pdpz\") pod \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\" (UID: \"a50d2174-643c-425d-92e5-ff1ab4d12f7a\") "
Feb 24 05:52:11.950608 master-0 kubenswrapper[34361]: I0224 05:52:11.950545 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50d2174-643c-425d-92e5-ff1ab4d12f7a-kube-api-access-7pdpz" (OuterVolumeSpecName: "kube-api-access-7pdpz") pod "a50d2174-643c-425d-92e5-ff1ab4d12f7a" (UID: "a50d2174-643c-425d-92e5-ff1ab4d12f7a"). InnerVolumeSpecName "kube-api-access-7pdpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:52:11.984644 master-0 kubenswrapper[34361]: I0224 05:52:11.980423 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a50d2174-643c-425d-92e5-ff1ab4d12f7a" (UID: "a50d2174-643c-425d-92e5-ff1ab4d12f7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:12.047985 master-0 kubenswrapper[34361]: I0224 05:52:12.047836 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pdpz\" (UniqueName: \"kubernetes.io/projected/a50d2174-643c-425d-92e5-ff1ab4d12f7a-kube-api-access-7pdpz\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:12.047985 master-0 kubenswrapper[34361]: I0224 05:52:12.047897 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:12.075346 master-0 kubenswrapper[34361]: I0224 05:52:12.074971 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-config-data" (OuterVolumeSpecName: "config-data") pod "a50d2174-643c-425d-92e5-ff1ab4d12f7a" (UID: "a50d2174-643c-425d-92e5-ff1ab4d12f7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:12.149687 master-0 kubenswrapper[34361]: I0224 05:52:12.149625 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50d2174-643c-425d-92e5-ff1ab4d12f7a-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:12.306422 master-0 kubenswrapper[34361]: I0224 05:52:12.306167 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-j2nkz" event={"ID":"a50d2174-643c-425d-92e5-ff1ab4d12f7a","Type":"ContainerDied","Data":"08f4ac1e14a802011124d37a85f6782541ceee8fd8f0f4bfcda8f6136fcaa65d"}
Feb 24 05:52:12.306422 master-0 kubenswrapper[34361]: I0224 05:52:12.306250 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f4ac1e14a802011124d37a85f6782541ceee8fd8f0f4bfcda8f6136fcaa65d"
Feb 24 05:52:12.306422 master-0 kubenswrapper[34361]: I0224 05:52:12.306259 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-j2nkz"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: I0224 05:52:12.802888 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dcp4q"]
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: E0224 05:52:12.803781 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66837be-9db0-4e89-be7a-fbcd10882b17" containerName="mariadb-database-create"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: I0224 05:52:12.803810 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66837be-9db0-4e89-be7a-fbcd10882b17" containerName="mariadb-database-create"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: E0224 05:52:12.803829 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50d2174-643c-425d-92e5-ff1ab4d12f7a" containerName="keystone-db-sync"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: I0224 05:52:12.803839 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50d2174-643c-425d-92e5-ff1ab4d12f7a" containerName="keystone-db-sync"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: E0224 05:52:12.803928 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a55d2e-a011-4a92-a4a5-3f36d34661b5" containerName="mariadb-account-create-update"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: I0224 05:52:12.803940 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a55d2e-a011-4a92-a4a5-3f36d34661b5" containerName="mariadb-account-create-update"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: E0224 05:52:12.803954 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f2ed2f-05e1-4060-8d74-200fcf3cbfe9" containerName="mariadb-account-create-update"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: I0224 05:52:12.803963 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f2ed2f-05e1-4060-8d74-200fcf3cbfe9" containerName="mariadb-account-create-update"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: E0224 05:52:12.803992 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f8406c-2516-4c44-b748-bdc79ef32db1" containerName="mariadb-database-create"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: I0224 05:52:12.804004 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f8406c-2516-4c44-b748-bdc79ef32db1" containerName="mariadb-database-create"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: E0224 05:52:12.804018 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" containerName="dnsmasq-dns"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: I0224 05:52:12.804027 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" containerName="dnsmasq-dns"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: E0224 05:52:12.804049 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" containerName="init"
Feb 24 05:52:12.804155 master-0 kubenswrapper[34361]: I0224 05:52:12.804058 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" containerName="init"
Feb 24 05:52:12.804911 master-0 kubenswrapper[34361]: I0224 05:52:12.804367 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f2ed2f-05e1-4060-8d74-200fcf3cbfe9" containerName="mariadb-account-create-update"
Feb 24 05:52:12.804911 master-0 kubenswrapper[34361]: I0224 05:52:12.804408 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fdfa97-4eba-4aa9-88e0-3b426829d15e" containerName="dnsmasq-dns"
Feb 24 05:52:12.804911 master-0 kubenswrapper[34361]: I0224 05:52:12.804423 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a55d2e-a011-4a92-a4a5-3f36d34661b5" containerName="mariadb-account-create-update"
Feb 24 05:52:12.804911 master-0 kubenswrapper[34361]: I0224 05:52:12.804456 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66837be-9db0-4e89-be7a-fbcd10882b17" containerName="mariadb-database-create"
Feb 24 05:52:12.804911 master-0 kubenswrapper[34361]: I0224 05:52:12.804478 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f8406c-2516-4c44-b748-bdc79ef32db1" containerName="mariadb-database-create"
Feb 24 05:52:12.804911 master-0 kubenswrapper[34361]: I0224 05:52:12.804508 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50d2174-643c-425d-92e5-ff1ab4d12f7a" containerName="keystone-db-sync"
Feb 24 05:52:12.817853 master-0 kubenswrapper[34361]: I0224 05:52:12.817791 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.827362 master-0 kubenswrapper[34361]: I0224 05:52:12.827280 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 24 05:52:12.827606 master-0 kubenswrapper[34361]: I0224 05:52:12.827550 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 24 05:52:12.838096 master-0 kubenswrapper[34361]: I0224 05:52:12.834467 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"]
Feb 24 05:52:12.838096 master-0 kubenswrapper[34361]: I0224 05:52:12.837996 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 24 05:52:12.838096 master-0 kubenswrapper[34361]: I0224 05:52:12.838068 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 24 05:52:12.838358 master-0 kubenswrapper[34361]: I0224 05:52:12.838223 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:12.849583 master-0 kubenswrapper[34361]: I0224 05:52:12.848576 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"]
Feb 24 05:52:12.868066 master-0 kubenswrapper[34361]: I0224 05:52:12.867770 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-credential-keys\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.868066 master-0 kubenswrapper[34361]: I0224 05:52:12.867835 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-fernet-keys\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.868066 master-0 kubenswrapper[34361]: I0224 05:52:12.867918 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-config-data\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.868066 master-0 kubenswrapper[34361]: I0224 05:52:12.867948 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pmqw\" (UniqueName: \"kubernetes.io/projected/72a1000b-680d-4c11-a03c-d208f81272dd-kube-api-access-9pmqw\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.868066 master-0 kubenswrapper[34361]: I0224 05:52:12.868006 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-combined-ca-bundle\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.868066 master-0 kubenswrapper[34361]: I0224 05:52:12.868062 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-scripts\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.869751 master-0 kubenswrapper[34361]: I0224 05:52:12.869708 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dcp4q"]
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971673 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-credential-keys\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971768 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-nb\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971802 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-fernet-keys\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971820 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-svc\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971842 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-config\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971894 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-swift-storage-0\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971933 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-config-data\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971952 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56kvh\" (UniqueName: \"kubernetes.io/projected/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-kube-api-access-56kvh\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.971972 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pmqw\" (UniqueName: \"kubernetes.io/projected/72a1000b-680d-4c11-a03c-d208f81272dd-kube-api-access-9pmqw\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.972030 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-combined-ca-bundle\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.972066 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-scripts\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:12.974822 master-0 kubenswrapper[34361]: I0224 05:52:12.972088 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-sb\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.026338 master-0 kubenswrapper[34361]: I0224 05:52:13.011453 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-fernet-keys\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:13.026338 master-0 kubenswrapper[34361]: I0224 05:52:13.015177 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-scripts\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:13.026338 master-0 kubenswrapper[34361]: I0224 05:52:13.015895 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-config-data\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:13.026338 master-0 kubenswrapper[34361]: I0224 05:52:13.016000 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-combined-ca-bundle\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:13.026338 master-0 kubenswrapper[34361]: I0224 05:52:13.021186 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-credential-keys\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:13.042356 master-0 kubenswrapper[34361]: I0224 05:52:13.040585 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-m7xgd"]
Feb 24 05:52:13.042677 master-0 kubenswrapper[34361]: I0224 05:52:13.042411 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m7xgd"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.073842 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-config\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.073920 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-sb\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.073975 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkx9z\" (UniqueName: \"kubernetes.io/projected/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-kube-api-access-qkx9z\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.074008 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-nb\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.074032 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-svc\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.074053 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-config\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.074084 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-combined-ca-bundle\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.074118 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-swift-storage-0\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.074154 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56kvh\" (UniqueName: \"kubernetes.io/projected/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-kube-api-access-56kvh\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.075691 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-nb\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.076520 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-sb\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.077466 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-swift-storage-0\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.077735 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-svc\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.077798 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-config\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.084537 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-hgms6"]
Feb 24 05:52:13.086701 master-0 kubenswrapper[34361]: I0224 05:52:13.086064 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-hgms6"
Feb 24 05:52:13.104426 master-0 kubenswrapper[34361]: I0224 05:52:13.102268 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m7xgd"]
Feb 24 05:52:13.104426 master-0 kubenswrapper[34361]: I0224 05:52:13.102354 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 24 05:52:13.127619 master-0 kubenswrapper[34361]: I0224 05:52:13.124089 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 24 05:52:13.146335 master-0 kubenswrapper[34361]: I0224 05:52:13.137735 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-db-sync-f9mbk"]
Feb 24 05:52:13.146335 master-0 kubenswrapper[34361]: I0224 05:52:13.139247 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-db-sync-f9mbk"
Feb 24 05:52:13.146335 master-0 kubenswrapper[34361]: I0224 05:52:13.143770 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pmqw\" (UniqueName: \"kubernetes.io/projected/72a1000b-680d-4c11-a03c-d208f81272dd-kube-api-access-9pmqw\") pod \"keystone-bootstrap-dcp4q\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " pod="openstack/keystone-bootstrap-dcp4q"
Feb 24 05:52:13.158336 master-0 kubenswrapper[34361]: I0224 05:52:13.153549 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-scripts"
Feb 24 05:52:13.158336 master-0 kubenswrapper[34361]: I0224 05:52:13.153769 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-config-data"
Feb 24 05:52:13.158336 master-0 kubenswrapper[34361]: I0224 05:52:13.154661 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56kvh\" (UniqueName: \"kubernetes.io/projected/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-kube-api-access-56kvh\") pod \"dnsmasq-dns-77dd9bf7ff-sv6dm\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") " pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:13.161366 master-0 kubenswrapper[34361]: I0224 05:52:13.160616 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-hgms6"]
Feb 24 05:52:13.172440 master-0 kubenswrapper[34361]: I0224 05:52:13.170143 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-db-sync-f9mbk"]
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.185808 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1e2cd8-9a9f-454d-b520-75769a722e55-operator-scripts\") pod \"ironic-db-create-hgms6\" (UID: \"9a1e2cd8-9a9f-454d-b520-75769a722e55\") " pod="openstack/ironic-db-create-hgms6"
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.185899 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-combined-ca-bundle\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk"
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.185943 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-db-sync-config-data\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk"
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.185992 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkx9z\" (UniqueName: \"kubernetes.io/projected/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-kube-api-access-qkx9z\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd"
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.186074 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-combined-ca-bundle\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd"
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.186096 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbbm\" (UniqueName: \"kubernetes.io/projected/9a1e2cd8-9a9f-454d-b520-75769a722e55-kube-api-access-vfbbm\") pod \"ironic-db-create-hgms6\" (UID: \"9a1e2cd8-9a9f-454d-b520-75769a722e55\") " pod="openstack/ironic-db-create-hgms6"
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.186132 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-scripts\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk"
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.186188 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41c862b6-5eb6-4f54-a435-a8e7691b87c9-etc-machine-id\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk"
Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.186234 34361 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljrcx\" (UniqueName: \"kubernetes.io/projected/41c862b6-5eb6-4f54-a435-a8e7691b87c9-kube-api-access-ljrcx\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.186284 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-config\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd" Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.186350 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-config-data\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.195779 master-0 kubenswrapper[34361]: I0224 05:52:13.193925 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dcp4q" Feb 24 05:52:13.218503 master-0 kubenswrapper[34361]: I0224 05:52:13.195831 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-combined-ca-bundle\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd" Feb 24 05:52:13.218503 master-0 kubenswrapper[34361]: I0224 05:52:13.198545 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-config\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd" Feb 24 05:52:13.242567 master-0 kubenswrapper[34361]: I0224 05:52:13.239915 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.286192 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkx9z\" (UniqueName: \"kubernetes.io/projected/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-kube-api-access-qkx9z\") pod \"neutron-db-sync-m7xgd\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " pod="openstack/neutron-db-sync-m7xgd" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.288795 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41c862b6-5eb6-4f54-a435-a8e7691b87c9-etc-machine-id\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.288866 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljrcx\" 
(UniqueName: \"kubernetes.io/projected/41c862b6-5eb6-4f54-a435-a8e7691b87c9-kube-api-access-ljrcx\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.288931 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-config-data\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.288953 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1e2cd8-9a9f-454d-b520-75769a722e55-operator-scripts\") pod \"ironic-db-create-hgms6\" (UID: \"9a1e2cd8-9a9f-454d-b520-75769a722e55\") " pod="openstack/ironic-db-create-hgms6" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.288983 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-combined-ca-bundle\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.289014 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-db-sync-config-data\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.289085 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vfbbm\" (UniqueName: \"kubernetes.io/projected/9a1e2cd8-9a9f-454d-b520-75769a722e55-kube-api-access-vfbbm\") pod \"ironic-db-create-hgms6\" (UID: \"9a1e2cd8-9a9f-454d-b520-75769a722e55\") " pod="openstack/ironic-db-create-hgms6" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.289108 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-scripts\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.290683 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1e2cd8-9a9f-454d-b520-75769a722e55-operator-scripts\") pod \"ironic-db-create-hgms6\" (UID: \"9a1e2cd8-9a9f-454d-b520-75769a722e55\") " pod="openstack/ironic-db-create-hgms6" Feb 24 05:52:13.290930 master-0 kubenswrapper[34361]: I0224 05:52:13.290740 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41c862b6-5eb6-4f54-a435-a8e7691b87c9-etc-machine-id\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.342124 master-0 kubenswrapper[34361]: I0224 05:52:13.340400 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-scripts\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.345394 master-0 kubenswrapper[34361]: I0224 05:52:13.342643 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-config-data\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.426576 master-0 kubenswrapper[34361]: I0224 05:52:13.349649 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-db-sync-config-data\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.426576 master-0 kubenswrapper[34361]: I0224 05:52:13.356450 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-combined-ca-bundle\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.440360 master-0 kubenswrapper[34361]: I0224 05:52:13.431148 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljrcx\" (UniqueName: \"kubernetes.io/projected/41c862b6-5eb6-4f54-a435-a8e7691b87c9-kube-api-access-ljrcx\") pod \"cinder-b7346-db-sync-f9mbk\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.457474 master-0 kubenswrapper[34361]: I0224 05:52:13.448915 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m7xgd" Feb 24 05:52:13.531351 master-0 kubenswrapper[34361]: I0224 05:52:13.528212 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfbbm\" (UniqueName: \"kubernetes.io/projected/9a1e2cd8-9a9f-454d-b520-75769a722e55-kube-api-access-vfbbm\") pod \"ironic-db-create-hgms6\" (UID: \"9a1e2cd8-9a9f-454d-b520-75769a722e55\") " pod="openstack/ironic-db-create-hgms6" Feb 24 05:52:13.550486 master-0 kubenswrapper[34361]: I0224 05:52:13.544765 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-b901-account-create-update-vmptn"] Feb 24 05:52:13.550486 master-0 kubenswrapper[34361]: I0224 05:52:13.546635 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:13.559999 master-0 kubenswrapper[34361]: I0224 05:52:13.559899 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-b901-account-create-update-vmptn"] Feb 24 05:52:13.562545 master-0 kubenswrapper[34361]: I0224 05:52:13.561977 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Feb 24 05:52:13.572560 master-0 kubenswrapper[34361]: I0224 05:52:13.563154 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-hgms6" Feb 24 05:52:13.572560 master-0 kubenswrapper[34361]: I0224 05:52:13.569026 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"] Feb 24 05:52:13.637382 master-0 kubenswrapper[34361]: I0224 05:52:13.633791 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9jgw\" (UniqueName: \"kubernetes.io/projected/32b27462-7223-4f43-8eea-25a2dcd42b17-kube-api-access-l9jgw\") pod \"ironic-b901-account-create-update-vmptn\" (UID: \"32b27462-7223-4f43-8eea-25a2dcd42b17\") " pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:13.652427 master-0 kubenswrapper[34361]: I0224 05:52:13.637793 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b27462-7223-4f43-8eea-25a2dcd42b17-operator-scripts\") pod \"ironic-b901-account-create-update-vmptn\" (UID: \"32b27462-7223-4f43-8eea-25a2dcd42b17\") " pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:13.652427 master-0 kubenswrapper[34361]: I0224 05:52:13.641811 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:13.676277 master-0 kubenswrapper[34361]: I0224 05:52:13.676071 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-629gt"] Feb 24 05:52:13.680177 master-0 kubenswrapper[34361]: I0224 05:52:13.679697 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.690283 master-0 kubenswrapper[34361]: I0224 05:52:13.690087 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 24 05:52:13.693845 master-0 kubenswrapper[34361]: I0224 05:52:13.691704 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 24 05:52:13.740671 master-0 kubenswrapper[34361]: I0224 05:52:13.740595 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-combined-ca-bundle\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.740671 master-0 kubenswrapper[34361]: I0224 05:52:13.740678 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-scripts\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.741643 master-0 kubenswrapper[34361]: I0224 05:52:13.740724 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b27462-7223-4f43-8eea-25a2dcd42b17-operator-scripts\") pod \"ironic-b901-account-create-update-vmptn\" (UID: \"32b27462-7223-4f43-8eea-25a2dcd42b17\") " pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:13.741643 master-0 kubenswrapper[34361]: I0224 05:52:13.740806 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a033a9c9-abde-4d05-b958-06c6bb913e85-logs\") pod \"placement-db-sync-629gt\" (UID: 
\"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.741643 master-0 kubenswrapper[34361]: I0224 05:52:13.740913 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skf6k\" (UniqueName: \"kubernetes.io/projected/a033a9c9-abde-4d05-b958-06c6bb913e85-kube-api-access-skf6k\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.741643 master-0 kubenswrapper[34361]: I0224 05:52:13.741019 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9jgw\" (UniqueName: \"kubernetes.io/projected/32b27462-7223-4f43-8eea-25a2dcd42b17-kube-api-access-l9jgw\") pod \"ironic-b901-account-create-update-vmptn\" (UID: \"32b27462-7223-4f43-8eea-25a2dcd42b17\") " pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:13.741643 master-0 kubenswrapper[34361]: I0224 05:52:13.741053 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-config-data\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.742212 master-0 kubenswrapper[34361]: I0224 05:52:13.742158 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b27462-7223-4f43-8eea-25a2dcd42b17-operator-scripts\") pod \"ironic-b901-account-create-update-vmptn\" (UID: \"32b27462-7223-4f43-8eea-25a2dcd42b17\") " pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:13.742940 master-0 kubenswrapper[34361]: I0224 05:52:13.742880 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-629gt"] Feb 24 05:52:13.759171 master-0 kubenswrapper[34361]: 
I0224 05:52:13.759089 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-564d4966c5-82kwv"] Feb 24 05:52:13.761600 master-0 kubenswrapper[34361]: I0224 05:52:13.761574 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.788551 master-0 kubenswrapper[34361]: I0224 05:52:13.788473 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564d4966c5-82kwv"] Feb 24 05:52:13.810435 master-0 kubenswrapper[34361]: I0224 05:52:13.810372 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9jgw\" (UniqueName: \"kubernetes.io/projected/32b27462-7223-4f43-8eea-25a2dcd42b17-kube-api-access-l9jgw\") pod \"ironic-b901-account-create-update-vmptn\" (UID: \"32b27462-7223-4f43-8eea-25a2dcd42b17\") " pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:13.846063 master-0 kubenswrapper[34361]: I0224 05:52:13.845113 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-swift-storage-0\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.846063 master-0 kubenswrapper[34361]: I0224 05:52:13.845212 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-svc\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.846063 master-0 kubenswrapper[34361]: I0224 05:52:13.845258 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-combined-ca-bundle\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.846063 master-0 kubenswrapper[34361]: I0224 05:52:13.845280 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-scripts\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.846063 master-0 kubenswrapper[34361]: I0224 05:52:13.845526 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a033a9c9-abde-4d05-b958-06c6bb913e85-logs\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.846063 master-0 kubenswrapper[34361]: I0224 05:52:13.845741 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txg6\" (UniqueName: \"kubernetes.io/projected/76fe2580-1b17-4dd5-bdac-693e4027a09e-kube-api-access-5txg6\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.846063 master-0 kubenswrapper[34361]: I0224 05:52:13.845854 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skf6k\" (UniqueName: \"kubernetes.io/projected/a033a9c9-abde-4d05-b958-06c6bb913e85-kube-api-access-skf6k\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.846063 master-0 kubenswrapper[34361]: I0224 05:52:13.845988 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-nb\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.847296 master-0 kubenswrapper[34361]: I0224 05:52:13.846564 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-sb\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.847296 master-0 kubenswrapper[34361]: I0224 05:52:13.846610 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-config\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.847296 master-0 kubenswrapper[34361]: I0224 05:52:13.846676 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-config-data\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.848912 master-0 kubenswrapper[34361]: I0224 05:52:13.848854 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-scripts\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.849461 master-0 kubenswrapper[34361]: I0224 05:52:13.849224 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a033a9c9-abde-4d05-b958-06c6bb913e85-logs\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.849851 master-0 kubenswrapper[34361]: I0224 05:52:13.849811 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-combined-ca-bundle\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.852721 master-0 kubenswrapper[34361]: I0224 05:52:13.852681 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-config-data\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.909489 master-0 kubenswrapper[34361]: I0224 05:52:13.909391 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skf6k\" (UniqueName: \"kubernetes.io/projected/a033a9c9-abde-4d05-b958-06c6bb913e85-kube-api-access-skf6k\") pod \"placement-db-sync-629gt\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " pod="openstack/placement-db-sync-629gt" Feb 24 05:52:13.952170 master-0 kubenswrapper[34361]: I0224 05:52:13.949837 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-nb\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.952170 master-0 kubenswrapper[34361]: I0224 05:52:13.949910 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-sb\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.952170 master-0 kubenswrapper[34361]: I0224 05:52:13.949940 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-config\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.952170 master-0 kubenswrapper[34361]: I0224 05:52:13.950009 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-swift-storage-0\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.952170 master-0 kubenswrapper[34361]: I0224 05:52:13.950038 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-svc\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.952170 master-0 kubenswrapper[34361]: I0224 05:52:13.950125 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5txg6\" (UniqueName: \"kubernetes.io/projected/76fe2580-1b17-4dd5-bdac-693e4027a09e-kube-api-access-5txg6\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.952170 master-0 kubenswrapper[34361]: I0224 05:52:13.951734 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-nb\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.976336 master-0 kubenswrapper[34361]: I0224 05:52:13.963903 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-swift-storage-0\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.976336 master-0 kubenswrapper[34361]: I0224 05:52:13.971510 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-config\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.976336 master-0 kubenswrapper[34361]: I0224 05:52:13.972946 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-svc\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.991083 master-0 kubenswrapper[34361]: I0224 05:52:13.982034 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-sb\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:13.991083 master-0 kubenswrapper[34361]: I0224 05:52:13.990193 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:14.061760 master-0 kubenswrapper[34361]: I0224 05:52:14.059245 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-629gt" Feb 24 05:52:14.156066 master-0 kubenswrapper[34361]: I0224 05:52:14.155999 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txg6\" (UniqueName: \"kubernetes.io/projected/76fe2580-1b17-4dd5-bdac-693e4027a09e-kube-api-access-5txg6\") pod \"dnsmasq-dns-564d4966c5-82kwv\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:14.198277 master-0 kubenswrapper[34361]: I0224 05:52:14.190774 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:14.345725 master-0 kubenswrapper[34361]: I0224 05:52:14.345626 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dcp4q"] Feb 24 05:52:14.456839 master-0 kubenswrapper[34361]: I0224 05:52:14.455804 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dcp4q" event={"ID":"72a1000b-680d-4c11-a03c-d208f81272dd","Type":"ContainerStarted","Data":"d7afc3022022e3c99044351489648cd6b9c053e5f1ded198c24627008473507f"} Feb 24 05:52:14.735367 master-0 kubenswrapper[34361]: I0224 05:52:14.732604 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m7xgd"] Feb 24 05:52:14.748926 master-0 kubenswrapper[34361]: I0224 05:52:14.746133 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-db-sync-f9mbk"] Feb 24 05:52:14.756115 master-0 kubenswrapper[34361]: I0224 05:52:14.755193 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"] Feb 24 05:52:14.793863 master-0 kubenswrapper[34361]: W0224 05:52:14.793700 34361 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f5d8934_00e0_46c9_ba9d_d9183edd6fb8.slice/crio-30e75ae7a378acfeb29382c3e292ebc80b38786d52e4eea64ed9f6c48aeb1920 WatchSource:0}: Error finding container 30e75ae7a378acfeb29382c3e292ebc80b38786d52e4eea64ed9f6c48aeb1920: Status 404 returned error can't find the container with id 30e75ae7a378acfeb29382c3e292ebc80b38786d52e4eea64ed9f6c48aeb1920 Feb 24 05:52:14.943073 master-0 kubenswrapper[34361]: I0224 05:52:14.942981 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:52:14.946024 master-0 kubenswrapper[34361]: I0224 05:52:14.945417 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:14.948546 master-0 kubenswrapper[34361]: I0224 05:52:14.948480 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 24 05:52:14.948659 master-0 kubenswrapper[34361]: I0224 05:52:14.948578 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 24 05:52:14.950785 master-0 kubenswrapper[34361]: I0224 05:52:14.949048 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bdafd-default-external-config-data" Feb 24 05:52:14.969157 master-0 kubenswrapper[34361]: I0224 05:52:14.965401 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-hgms6"] Feb 24 05:52:14.977384 master-0 kubenswrapper[34361]: W0224 05:52:14.973245 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a1e2cd8_9a9f_454d_b520_75769a722e55.slice/crio-d14e081fa348fb424076b79487dd707513e6fe77ddcbebb2f5ccf9f540046796 WatchSource:0}: Error finding container 
d14e081fa348fb424076b79487dd707513e6fe77ddcbebb2f5ccf9f540046796: Status 404 returned error can't find the container with id d14e081fa348fb424076b79487dd707513e6fe77ddcbebb2f5ccf9f540046796 Feb 24 05:52:14.992389 master-0 kubenswrapper[34361]: I0224 05:52:14.986429 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:52:15.133188 master-0 kubenswrapper[34361]: I0224 05:52:15.130986 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.133188 master-0 kubenswrapper[34361]: I0224 05:52:15.131106 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.133188 master-0 kubenswrapper[34361]: I0224 05:52:15.131134 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8r6\" (UniqueName: \"kubernetes.io/projected/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-kube-api-access-gm8r6\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.133188 master-0 kubenswrapper[34361]: I0224 05:52:15.131176 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-logs\") pod 
\"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.133188 master-0 kubenswrapper[34361]: I0224 05:52:15.131199 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.133188 master-0 kubenswrapper[34361]: I0224 05:52:15.131242 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.133188 master-0 kubenswrapper[34361]: I0224 05:52:15.131260 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.133188 master-0 kubenswrapper[34361]: I0224 05:52:15.131299 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.239616 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.239728 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.239800 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.239825 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm8r6\" (UniqueName: \"kubernetes.io/projected/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-kube-api-access-gm8r6\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.239881 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-logs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 
kubenswrapper[34361]: I0224 05:52:15.239903 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.239949 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.239966 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.240592 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.243508 master-0 kubenswrapper[34361]: I0224 05:52:15.242785 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-logs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 
24 05:52:15.262798 master-0 kubenswrapper[34361]: I0224 05:52:15.262340 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-629gt"] Feb 24 05:52:15.263957 master-0 kubenswrapper[34361]: I0224 05:52:15.263846 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.263957 master-0 kubenswrapper[34361]: I0224 05:52:15.263878 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.266583 master-0 kubenswrapper[34361]: I0224 05:52:15.264270 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 05:52:15.266583 master-0 kubenswrapper[34361]: I0224 05:52:15.264320 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7d863fd5501d6d1171206f6d6ea42c84796ef7fcbd0ecfb3be968cf37320363b/globalmount\"" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.266583 master-0 kubenswrapper[34361]: I0224 05:52:15.265571 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.267278 master-0 kubenswrapper[34361]: I0224 05:52:15.267224 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.275266 master-0 kubenswrapper[34361]: I0224 05:52:15.275158 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm8r6\" (UniqueName: \"kubernetes.io/projected/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-kube-api-access-gm8r6\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:15.280930 master-0 kubenswrapper[34361]: I0224 05:52:15.280866 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ironic-b901-account-create-update-vmptn"] Feb 24 05:52:15.292989 master-0 kubenswrapper[34361]: W0224 05:52:15.292909 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda033a9c9_abde_4d05_b958_06c6bb913e85.slice/crio-6da48d88c63a8be25aa2e5e4a182352bafe8bf63ce18dc721a2bd00d1c00b344 WatchSource:0}: Error finding container 6da48d88c63a8be25aa2e5e4a182352bafe8bf63ce18dc721a2bd00d1c00b344: Status 404 returned error can't find the container with id 6da48d88c63a8be25aa2e5e4a182352bafe8bf63ce18dc721a2bd00d1c00b344 Feb 24 05:52:15.295786 master-0 kubenswrapper[34361]: I0224 05:52:15.295721 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-564d4966c5-82kwv"] Feb 24 05:52:15.295832 master-0 kubenswrapper[34361]: W0224 05:52:15.295739 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32b27462_7223_4f43_8eea_25a2dcd42b17.slice/crio-26de47a233b29156ab49c82812f66aac2ec0c0a1f5d841d5a9aee92e8b4874bb WatchSource:0}: Error finding container 26de47a233b29156ab49c82812f66aac2ec0c0a1f5d841d5a9aee92e8b4874bb: Status 404 returned error can't find the container with id 26de47a233b29156ab49c82812f66aac2ec0c0a1f5d841d5a9aee92e8b4874bb Feb 24 05:52:15.302379 master-0 kubenswrapper[34361]: W0224 05:52:15.302326 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76fe2580_1b17_4dd5_bdac_693e4027a09e.slice/crio-458afe3acc00c047535908b73d4386c29e9cd2f688113b67fb1c968c8f330588 WatchSource:0}: Error finding container 458afe3acc00c047535908b73d4386c29e9cd2f688113b67fb1c968c8f330588: Status 404 returned error can't find the container with id 458afe3acc00c047535908b73d4386c29e9cd2f688113b67fb1c968c8f330588 Feb 24 05:52:15.505285 master-0 kubenswrapper[34361]: I0224 05:52:15.505028 34361 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-629gt" event={"ID":"a033a9c9-abde-4d05-b958-06c6bb913e85","Type":"ContainerStarted","Data":"6da48d88c63a8be25aa2e5e4a182352bafe8bf63ce18dc721a2bd00d1c00b344"} Feb 24 05:52:15.521880 master-0 kubenswrapper[34361]: I0224 05:52:15.521705 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-hgms6" event={"ID":"9a1e2cd8-9a9f-454d-b520-75769a722e55","Type":"ContainerStarted","Data":"7ee8a6559ab43eb317259dafe7cdf86adf385019298144a14eb4ca8308528154"} Feb 24 05:52:15.521880 master-0 kubenswrapper[34361]: I0224 05:52:15.521780 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-hgms6" event={"ID":"9a1e2cd8-9a9f-454d-b520-75769a722e55","Type":"ContainerStarted","Data":"d14e081fa348fb424076b79487dd707513e6fe77ddcbebb2f5ccf9f540046796"} Feb 24 05:52:15.532460 master-0 kubenswrapper[34361]: I0224 05:52:15.530590 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" event={"ID":"76fe2580-1b17-4dd5-bdac-693e4027a09e","Type":"ContainerStarted","Data":"458afe3acc00c047535908b73d4386c29e9cd2f688113b67fb1c968c8f330588"} Feb 24 05:52:15.535428 master-0 kubenswrapper[34361]: I0224 05:52:15.535384 34361 generic.go:334] "Generic (PLEG): container finished" podID="0a54b59d-5bdb-4de2-afb3-8d68064c94c9" containerID="c112fd074325bc5341632db5c1aceb5c347c417d89695e11e8fb8e7c0b6cb665" exitCode=0 Feb 24 05:52:15.535843 master-0 kubenswrapper[34361]: I0224 05:52:15.535820 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm" event={"ID":"0a54b59d-5bdb-4de2-afb3-8d68064c94c9","Type":"ContainerDied","Data":"c112fd074325bc5341632db5c1aceb5c347c417d89695e11e8fb8e7c0b6cb665"} Feb 24 05:52:15.535950 master-0 kubenswrapper[34361]: I0224 05:52:15.535935 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm" 
event={"ID":"0a54b59d-5bdb-4de2-afb3-8d68064c94c9","Type":"ContainerStarted","Data":"ed3ee91a001265cb1fb20c2e8bd2c7547680946fcb17b8c6788f07ac04d69b8f"} Feb 24 05:52:15.548258 master-0 kubenswrapper[34361]: I0224 05:52:15.547891 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-db-sync-f9mbk" event={"ID":"41c862b6-5eb6-4f54-a435-a8e7691b87c9","Type":"ContainerStarted","Data":"837a6c85f8d72a5d5b8cfea4438b650db2c3015c8e36d7d64e077b1aa9ee2700"} Feb 24 05:52:15.549335 master-0 kubenswrapper[34361]: I0224 05:52:15.549294 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b901-account-create-update-vmptn" event={"ID":"32b27462-7223-4f43-8eea-25a2dcd42b17","Type":"ContainerStarted","Data":"26de47a233b29156ab49c82812f66aac2ec0c0a1f5d841d5a9aee92e8b4874bb"} Feb 24 05:52:15.550869 master-0 kubenswrapper[34361]: I0224 05:52:15.550840 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m7xgd" event={"ID":"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8","Type":"ContainerStarted","Data":"e1779a5577f87379b3eaec1b4b22da92e33df9ea40fc881bc79cca47a933b8d7"} Feb 24 05:52:15.550869 master-0 kubenswrapper[34361]: I0224 05:52:15.550868 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m7xgd" event={"ID":"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8","Type":"ContainerStarted","Data":"30e75ae7a378acfeb29382c3e292ebc80b38786d52e4eea64ed9f6c48aeb1920"} Feb 24 05:52:15.566531 master-0 kubenswrapper[34361]: I0224 05:52:15.566246 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dcp4q" event={"ID":"72a1000b-680d-4c11-a03c-d208f81272dd","Type":"ContainerStarted","Data":"c35c7ecde0912c81c6f0da24c270691434f7feef5c1a125826559e148787e233"} Feb 24 05:52:15.578436 master-0 kubenswrapper[34361]: I0224 05:52:15.578363 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:52:15.579661 
master-0 kubenswrapper[34361]: E0224 05:52:15.579619 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-bdafd-default-external-api-0" podUID="b19f8423-1a98-4e8f-902d-d7fbf56a12e7" Feb 24 05:52:15.608632 master-0 kubenswrapper[34361]: I0224 05:52:15.604356 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-create-hgms6" podStartSLOduration=3.604329834 podStartE2EDuration="3.604329834s" podCreationTimestamp="2026-02-24 05:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:15.556646689 +0000 UTC m=+895.259263735" watchObservedRunningTime="2026-02-24 05:52:15.604329834 +0000 UTC m=+895.306946880" Feb 24 05:52:15.619351 master-0 kubenswrapper[34361]: I0224 05:52:15.616150 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-m7xgd" podStartSLOduration=3.616117622 podStartE2EDuration="3.616117622s" podCreationTimestamp="2026-02-24 05:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:15.582140376 +0000 UTC m=+895.284757422" watchObservedRunningTime="2026-02-24 05:52:15.616117622 +0000 UTC m=+895.318734668" Feb 24 05:52:15.678342 master-0 kubenswrapper[34361]: I0224 05:52:15.675875 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dcp4q" podStartSLOduration=3.6758414029999997 podStartE2EDuration="3.675841403s" podCreationTimestamp="2026-02-24 05:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:15.650194811 +0000 UTC m=+895.352811857" 
watchObservedRunningTime="2026-02-24 05:52:15.675841403 +0000 UTC m=+895.378458449" Feb 24 05:52:15.762471 master-0 kubenswrapper[34361]: I0224 05:52:15.762133 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:52:15.782368 master-0 kubenswrapper[34361]: I0224 05:52:15.766096 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.782368 master-0 kubenswrapper[34361]: I0224 05:52:15.768460 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 24 05:52:15.782368 master-0 kubenswrapper[34361]: I0224 05:52:15.769247 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bdafd-default-internal-config-data" Feb 24 05:52:15.785338 master-0 kubenswrapper[34361]: I0224 05:52:15.784711 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:52:15.855345 master-0 kubenswrapper[34361]: I0224 05:52:15.855271 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-internal-tls-certs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.855530 master-0 kubenswrapper[34361]: I0224 05:52:15.855481 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.855695 master-0 kubenswrapper[34361]: I0224 
05:52:15.855666 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfr2\" (UniqueName: \"kubernetes.io/projected/9353daa8-f1c5-493d-8f31-bfc3074c6223-kube-api-access-hrfr2\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.856261 master-0 kubenswrapper[34361]: I0224 05:52:15.856228 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-logs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.861183 master-0 kubenswrapper[34361]: I0224 05:52:15.861153 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-config-data\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.861253 master-0 kubenswrapper[34361]: I0224 05:52:15.861188 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-scripts\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.861965 master-0 kubenswrapper[34361]: I0224 05:52:15.861587 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-httpd-run\") pod \"glance-bdafd-default-internal-api-0\" (UID: 
\"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.861965 master-0 kubenswrapper[34361]: I0224 05:52:15.861733 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-combined-ca-bundle\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.964701 master-0 kubenswrapper[34361]: I0224 05:52:15.964624 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-logs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.965111 master-0 kubenswrapper[34361]: I0224 05:52:15.964974 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-scripts\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.965111 master-0 kubenswrapper[34361]: I0224 05:52:15.965103 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-config-data\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.965195 master-0 kubenswrapper[34361]: I0224 05:52:15.965162 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-httpd-run\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.965249 master-0 kubenswrapper[34361]: I0224 05:52:15.965228 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-combined-ca-bundle\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.965515 master-0 kubenswrapper[34361]: I0224 05:52:15.965484 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-internal-tls-certs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.965605 master-0 kubenswrapper[34361]: I0224 05:52:15.965580 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.965784 master-0 kubenswrapper[34361]: I0224 05:52:15.965731 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrfr2\" (UniqueName: \"kubernetes.io/projected/9353daa8-f1c5-493d-8f31-bfc3074c6223-kube-api-access-hrfr2\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.967183 master-0 kubenswrapper[34361]: I0224 
05:52:15.967112 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-logs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.967915 master-0 kubenswrapper[34361]: I0224 05:52:15.967410 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-httpd-run\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.972852 master-0 kubenswrapper[34361]: I0224 05:52:15.972814 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 05:52:15.972926 master-0 kubenswrapper[34361]: I0224 05:52:15.972865 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/03c215443f2c43fe19f38e42f351895e0bcaecfa5c9fe4b43c46bb54166b4232/globalmount\"" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.972926 master-0 kubenswrapper[34361]: I0224 05:52:15.972904 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-scripts\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.973068 master-0 kubenswrapper[34361]: I0224 05:52:15.973020 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-combined-ca-bundle\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.973480 master-0 kubenswrapper[34361]: I0224 05:52:15.973277 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-internal-tls-certs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.974009 master-0 kubenswrapper[34361]: I0224 05:52:15.973975 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-config-data\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:15.997713 master-0 kubenswrapper[34361]: I0224 05:52:15.997668 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrfr2\" (UniqueName: \"kubernetes.io/projected/9353daa8-f1c5-493d-8f31-bfc3074c6223-kube-api-access-hrfr2\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:16.198401 master-0 kubenswrapper[34361]: I0224 05:52:16.198279 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:16.376745 master-0 kubenswrapper[34361]: I0224 05:52:16.376517 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-svc\") pod \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") "
Feb 24 05:52:16.376745 master-0 kubenswrapper[34361]: I0224 05:52:16.376622 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-config\") pod \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") "
Feb 24 05:52:16.377132 master-0 kubenswrapper[34361]: I0224 05:52:16.376976 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-nb\") pod \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") "
Feb 24 05:52:16.377132 master-0 kubenswrapper[34361]: I0224 05:52:16.377042 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56kvh\" (UniqueName: \"kubernetes.io/projected/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-kube-api-access-56kvh\") pod \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") "
Feb 24 05:52:16.377891 master-0 kubenswrapper[34361]: I0224 05:52:16.377227 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-sb\") pod \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") "
Feb 24 05:52:16.377891 master-0 kubenswrapper[34361]: I0224 05:52:16.377272 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-swift-storage-0\") pod \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\" (UID: \"0a54b59d-5bdb-4de2-afb3-8d68064c94c9\") "
Feb 24 05:52:16.382102 master-0 kubenswrapper[34361]: I0224 05:52:16.382033 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-kube-api-access-56kvh" (OuterVolumeSpecName: "kube-api-access-56kvh") pod "0a54b59d-5bdb-4de2-afb3-8d68064c94c9" (UID: "0a54b59d-5bdb-4de2-afb3-8d68064c94c9"). InnerVolumeSpecName "kube-api-access-56kvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:52:16.411801 master-0 kubenswrapper[34361]: I0224 05:52:16.411666 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0a54b59d-5bdb-4de2-afb3-8d68064c94c9" (UID: "0a54b59d-5bdb-4de2-afb3-8d68064c94c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:16.412986 master-0 kubenswrapper[34361]: I0224 05:52:16.412890 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0a54b59d-5bdb-4de2-afb3-8d68064c94c9" (UID: "0a54b59d-5bdb-4de2-afb3-8d68064c94c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:16.413118 master-0 kubenswrapper[34361]: I0224 05:52:16.413049 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-config" (OuterVolumeSpecName: "config") pod "0a54b59d-5bdb-4de2-afb3-8d68064c94c9" (UID: "0a54b59d-5bdb-4de2-afb3-8d68064c94c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:16.416464 master-0 kubenswrapper[34361]: I0224 05:52:16.416400 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0a54b59d-5bdb-4de2-afb3-8d68064c94c9" (UID: "0a54b59d-5bdb-4de2-afb3-8d68064c94c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:16.439431 master-0 kubenswrapper[34361]: I0224 05:52:16.439327 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0a54b59d-5bdb-4de2-afb3-8d68064c94c9" (UID: "0a54b59d-5bdb-4de2-afb3-8d68064c94c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:16.480636 master-0 kubenswrapper[34361]: I0224 05:52:16.480573 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.480636 master-0 kubenswrapper[34361]: I0224 05:52:16.480626 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.480636 master-0 kubenswrapper[34361]: I0224 05:52:16.480638 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.480636 master-0 kubenswrapper[34361]: I0224 05:52:16.480651 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56kvh\" (UniqueName: \"kubernetes.io/projected/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-kube-api-access-56kvh\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.480636 master-0 kubenswrapper[34361]: I0224 05:52:16.480661 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.480636 master-0 kubenswrapper[34361]: I0224 05:52:16.480673 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a54b59d-5bdb-4de2-afb3-8d68064c94c9-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.592552 master-0 kubenswrapper[34361]: I0224 05:52:16.592491 34361 generic.go:334] "Generic (PLEG): container finished" podID="9a1e2cd8-9a9f-454d-b520-75769a722e55"
containerID="7ee8a6559ab43eb317259dafe7cdf86adf385019298144a14eb4ca8308528154" exitCode=0
Feb 24 05:52:16.592855 master-0 kubenswrapper[34361]: I0224 05:52:16.592595 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-hgms6" event={"ID":"9a1e2cd8-9a9f-454d-b520-75769a722e55","Type":"ContainerDied","Data":"7ee8a6559ab43eb317259dafe7cdf86adf385019298144a14eb4ca8308528154"}
Feb 24 05:52:16.596526 master-0 kubenswrapper[34361]: I0224 05:52:16.595632 34361 generic.go:334] "Generic (PLEG): container finished" podID="76fe2580-1b17-4dd5-bdac-693e4027a09e" containerID="e3e63cffc76806b2461a1e6bc7c6a3e085e9a0b605198c8b84b959db2c742953" exitCode=0
Feb 24 05:52:16.596526 master-0 kubenswrapper[34361]: I0224 05:52:16.596216 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" event={"ID":"76fe2580-1b17-4dd5-bdac-693e4027a09e","Type":"ContainerDied","Data":"e3e63cffc76806b2461a1e6bc7c6a3e085e9a0b605198c8b84b959db2c742953"}
Feb 24 05:52:16.600142 master-0 kubenswrapper[34361]: I0224 05:52:16.600107 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"
Feb 24 05:52:16.616225 master-0 kubenswrapper[34361]: I0224 05:52:16.616163 34361 generic.go:334] "Generic (PLEG): container finished" podID="32b27462-7223-4f43-8eea-25a2dcd42b17" containerID="32a49cb5298021eb2317880085eff0f3e379e1e29dc76290de13fe982a1891e4" exitCode=0
Feb 24 05:52:16.617145 master-0 kubenswrapper[34361]: I0224 05:52:16.617114 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:16.653771 master-0 kubenswrapper[34361]: I0224 05:52:16.653190 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:16.655966 master-0 kubenswrapper[34361]: I0224 05:52:16.655921 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77dd9bf7ff-sv6dm" event={"ID":"0a54b59d-5bdb-4de2-afb3-8d68064c94c9","Type":"ContainerDied","Data":"ed3ee91a001265cb1fb20c2e8bd2c7547680946fcb17b8c6788f07ac04d69b8f"}
Feb 24 05:52:16.656053 master-0 kubenswrapper[34361]: I0224 05:52:16.655972 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b901-account-create-update-vmptn" event={"ID":"32b27462-7223-4f43-8eea-25a2dcd42b17","Type":"ContainerDied","Data":"32a49cb5298021eb2317880085eff0f3e379e1e29dc76290de13fe982a1891e4"}
Feb 24 05:52:16.656053 master-0 kubenswrapper[34361]: I0224 05:52:16.656002 34361 scope.go:117] "RemoveContainer" containerID="c112fd074325bc5341632db5c1aceb5c347c417d89695e11e8fb8e7c0b6cb665"
Feb 24 05:52:16.777331 master-0 kubenswrapper[34361]: I0224 05:52:16.777044 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:16.787488 master-0 kubenswrapper[34361]: I0224 05:52:16.787433 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-httpd-run\") pod \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") "
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.787898 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-config-data\") pod
\"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") "
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.788024 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-scripts\") pod \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") "
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.788149 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-logs\") pod \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") "
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.788186 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-public-tls-certs\") pod \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") "
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.788282 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm8r6\" (UniqueName: \"kubernetes.io/projected/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-kube-api-access-gm8r6\") pod \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") "
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.788444 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-combined-ca-bundle\") pod \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") "
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.787898 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b19f8423-1a98-4e8f-902d-d7fbf56a12e7" (UID: "b19f8423-1a98-4e8f-902d-d7fbf56a12e7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.791340 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-logs" (OuterVolumeSpecName: "logs") pod "b19f8423-1a98-4e8f-902d-d7fbf56a12e7" (UID: "b19f8423-1a98-4e8f-902d-d7fbf56a12e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:52:16.793689 master-0 kubenswrapper[34361]: I0224 05:52:16.792815 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-config-data" (OuterVolumeSpecName: "config-data") pod "b19f8423-1a98-4e8f-902d-d7fbf56a12e7" (UID: "b19f8423-1a98-4e8f-902d-d7fbf56a12e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:16.794151 master-0 kubenswrapper[34361]: I0224 05:52:16.794033 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-scripts" (OuterVolumeSpecName: "scripts") pod "b19f8423-1a98-4e8f-902d-d7fbf56a12e7" (UID: "b19f8423-1a98-4e8f-902d-d7fbf56a12e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:16.795656 master-0 kubenswrapper[34361]: I0224 05:52:16.795625 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"]
Feb 24 05:52:16.799143 master-0 kubenswrapper[34361]: I0224 05:52:16.799101 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-kube-api-access-gm8r6" (OuterVolumeSpecName: "kube-api-access-gm8r6") pod "b19f8423-1a98-4e8f-902d-d7fbf56a12e7" (UID: "b19f8423-1a98-4e8f-902d-d7fbf56a12e7"). InnerVolumeSpecName "kube-api-access-gm8r6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:52:16.800193 master-0 kubenswrapper[34361]: I0224 05:52:16.800137 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b19f8423-1a98-4e8f-902d-d7fbf56a12e7" (UID: "b19f8423-1a98-4e8f-902d-d7fbf56a12e7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:16.811666 master-0 kubenswrapper[34361]: I0224 05:52:16.811611 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b19f8423-1a98-4e8f-902d-d7fbf56a12e7" (UID: "b19f8423-1a98-4e8f-902d-d7fbf56a12e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:16.825062 master-0 kubenswrapper[34361]: I0224 05:52:16.824998 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77dd9bf7ff-sv6dm"]
Feb 24 05:52:16.893976 master-0 kubenswrapper[34361]: I0224 05:52:16.890762 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\" (UID: \"b19f8423-1a98-4e8f-902d-d7fbf56a12e7\") "
Feb 24 05:52:16.893976 master-0 kubenswrapper[34361]: I0224 05:52:16.891577 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.893976 master-0 kubenswrapper[34361]: I0224 05:52:16.891591 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.893976 master-0 kubenswrapper[34361]: I0224 05:52:16.891601 34361 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.893976 master-0 kubenswrapper[34361]: I0224 05:52:16.891612 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gm8r6\" (UniqueName: \"kubernetes.io/projected/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-kube-api-access-gm8r6\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.893976 master-0 kubenswrapper[34361]: I0224 05:52:16.891621 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.893976 master-0 kubenswrapper[34361]: I0224 05:52:16.891630 34361 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:16.893976 master-0 kubenswrapper[34361]: I0224 05:52:16.891639 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b19f8423-1a98-4e8f-902d-d7fbf56a12e7-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:18.501087 master-0 kubenswrapper[34361]: I0224 05:52:18.501013 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" event={"ID":"76fe2580-1b17-4dd5-bdac-693e4027a09e","Type":"ContainerStarted","Data":"bf87fa3ad91c9791fd5c0f1f5dee9989e63c440a986753d19e543def1d63c006"}
Feb 24 05:52:18.504894 master-0 kubenswrapper[34361]: I0224 05:52:18.504857 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-564d4966c5-82kwv"
Feb 24 05:52:18.506591 master-0 kubenswrapper[34361]: I0224 05:52:18.506563 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:18.616161 master-0 kubenswrapper[34361]: I0224 05:52:18.614976 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a54b59d-5bdb-4de2-afb3-8d68064c94c9" path="/var/lib/kubelet/pods/0a54b59d-5bdb-4de2-afb3-8d68064c94c9/volumes"
Feb 24 05:52:19.056475 master-0 kubenswrapper[34361]: I0224 05:52:19.056360 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" podStartSLOduration=6.056335278 podStartE2EDuration="6.056335278s" podCreationTimestamp="2026-02-24 05:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:19.046782811 +0000 UTC m=+898.749399877" watchObservedRunningTime="2026-02-24 05:52:19.056335278 +0000 UTC m=+898.758952344"
Feb 24 05:52:21.911435 master-0 kubenswrapper[34361]: I0224 05:52:21.911372 34361 trace.go:236] Trace[973710436]: "Calculate volume metrics of glance for pod openstack/glance-bdafd-default-external-api-0" (24-Feb-2026 05:52:20.547) (total time: 1363ms):
Feb 24 05:52:21.911435 master-0 kubenswrapper[34361]: Trace[973710436]: [1.363593437s] [1.363593437s] END
Feb 24 05:52:21.912211 master-0 kubenswrapper[34361]: I0224 05:52:21.911785 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65" (OuterVolumeSpecName: "glance") pod "b19f8423-1a98-4e8f-902d-d7fbf56a12e7" (UID: "b19f8423-1a98-4e8f-902d-d7fbf56a12e7"). InnerVolumeSpecName "pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 24 05:52:21.920903 master-0 kubenswrapper[34361]: I0224 05:52:21.920816 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") " pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:52:21.955601 master-0 kubenswrapper[34361]: I0224 05:52:21.955552 34361 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") on node \"master-0\" "
Feb 24 05:52:21.982275 master-0 kubenswrapper[34361]: I0224 05:52:21.982208 34361 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 24 05:52:21.982585 master-0 kubenswrapper[34361]: I0224 05:52:21.982497 34361 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf" (UniqueName: "kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65") on node "master-0"
Feb 24 05:52:22.065740 master-0 kubenswrapper[34361]: I0224 05:52:22.065645 34361 reconciler_common.go:293] "Volume detached for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:22.194506 master-0 kubenswrapper[34361]: I0224 05:52:22.192743 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"]
Feb 24 05:52:22.223935 master-0 kubenswrapper[34361]: I0224 05:52:22.223592 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"]
Feb 24 05:52:22.227485 master-0
kubenswrapper[34361]: I0224 05:52:22.227421 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:52:22.260514 master-0 kubenswrapper[34361]: I0224 05:52:22.260403 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bdafd-default-external-api-0"]
Feb 24 05:52:22.261268 master-0 kubenswrapper[34361]: E0224 05:52:22.261146 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a54b59d-5bdb-4de2-afb3-8d68064c94c9" containerName="init"
Feb 24 05:52:22.261268 master-0 kubenswrapper[34361]: I0224 05:52:22.261169 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a54b59d-5bdb-4de2-afb3-8d68064c94c9" containerName="init"
Feb 24 05:52:22.261481 master-0 kubenswrapper[34361]: I0224 05:52:22.261452 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a54b59d-5bdb-4de2-afb3-8d68064c94c9" containerName="init"
Feb 24 05:52:22.262881 master-0 kubenswrapper[34361]: I0224 05:52:22.262850 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.266689 master-0 kubenswrapper[34361]: I0224 05:52:22.266615 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bdafd-default-external-config-data"
Feb 24 05:52:22.267594 master-0 kubenswrapper[34361]: I0224 05:52:22.267565 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 24 05:52:22.275837 master-0 kubenswrapper[34361]: I0224 05:52:22.275775 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"]
Feb 24 05:52:22.376395 master-0 kubenswrapper[34361]: I0224 05:52:22.376335 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.376735 master-0 kubenswrapper[34361]: I0224 05:52:22.376502 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.376735 master-0 kubenswrapper[34361]: I0224 05:52:22.376609 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-logs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.376735 master-0 kubenswrapper[34361]: I0224 05:52:22.376658 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.376735 master-0 kubenswrapper[34361]: I0224 05:52:22.376709 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5nv\" (UniqueName: \"kubernetes.io/projected/167c633e-12d2-45f6-a746-7437ee0bbfff-kube-api-access-kd5nv\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.376886 master-0 kubenswrapper[34361]: I0224 05:52:22.376755 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.378589 master-0 kubenswrapper[34361]: I0224 05:52:22.377299 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.378589 master-0 kubenswrapper[34361]: I0224 05:52:22.377428 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.482257 master-0 kubenswrapper[34361]: I0224 05:52:22.481436 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.482257 master-0 kubenswrapper[34361]: I0224 05:52:22.481591 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-logs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.482257 master-0 kubenswrapper[34361]: I0224 05:52:22.481636 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.482257 master-0 kubenswrapper[34361]: I0224 05:52:22.481664 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd5nv\" (UniqueName: \"kubernetes.io/projected/167c633e-12d2-45f6-a746-7437ee0bbfff-kube-api-access-kd5nv\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.482257 master-0 kubenswrapper[34361]: I0224 05:52:22.481728 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.482257 master-0 kubenswrapper[34361]: I0224 05:52:22.481775 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.482257 master-0 kubenswrapper[34361]: I0224 05:52:22.481802 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.482257 master-0 kubenswrapper[34361]: I0224 05:52:22.481829 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.488784 master-0 kubenswrapper[34361]: I0224 05:52:22.487245 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.492278 master-0 kubenswrapper[34361]: I0224 05:52:22.490477 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 05:52:22.492278 master-0 kubenswrapper[34361]: I0224 05:52:22.490547 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7d863fd5501d6d1171206f6d6ea42c84796ef7fcbd0ecfb3be968cf37320363b/globalmount\"" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.492872 master-0 kubenswrapper[34361]: I0224 05:52:22.492657 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-logs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.493260 master-0 kubenswrapper[34361]: I0224 05:52:22.493197 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.493678 master-0 kubenswrapper[34361]: I0224 05:52:22.493624 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.497120 master-0 kubenswrapper[34361]: I0224 05:52:22.497076 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.497247 master-0 kubenswrapper[34361]: I0224 05:52:22.497157 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.505187 master-0 kubenswrapper[34361]: I0224 05:52:22.505076 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd5nv\" (UniqueName: \"kubernetes.io/projected/167c633e-12d2-45f6-a746-7437ee0bbfff-kube-api-access-kd5nv\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:52:22.612431 master-0 kubenswrapper[34361]: I0224 05:52:22.612228 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19f8423-1a98-4e8f-902d-d7fbf56a12e7" path="/var/lib/kubelet/pods/b19f8423-1a98-4e8f-902d-d7fbf56a12e7/volumes"
Feb 24 05:52:22.681436 master-0 kubenswrapper[34361]: I0224 05:52:22.677412 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-hgms6"
Feb 24 05:52:22.745622 master-0 kubenswrapper[34361]: I0224 05:52:22.734626 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b901-account-create-update-vmptn"
Feb 24 05:52:22.803370 master-0 kubenswrapper[34361]: I0224 05:52:22.802415 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1e2cd8-9a9f-454d-b520-75769a722e55-operator-scripts\") pod \"9a1e2cd8-9a9f-454d-b520-75769a722e55\" (UID: \"9a1e2cd8-9a9f-454d-b520-75769a722e55\") "
Feb 24 05:52:22.803370 master-0 kubenswrapper[34361]: I0224 05:52:22.802708 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfbbm\" (UniqueName: \"kubernetes.io/projected/9a1e2cd8-9a9f-454d-b520-75769a722e55-kube-api-access-vfbbm\") pod \"9a1e2cd8-9a9f-454d-b520-75769a722e55\" (UID: \"9a1e2cd8-9a9f-454d-b520-75769a722e55\") "
Feb 24 05:52:22.814427 master-0 kubenswrapper[34361]: I0224 05:52:22.813757 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a1e2cd8-9a9f-454d-b520-75769a722e55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a1e2cd8-9a9f-454d-b520-75769a722e55" (UID: "9a1e2cd8-9a9f-454d-b520-75769a722e55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:52:22.845901 master-0 kubenswrapper[34361]: I0224 05:52:22.845674 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a1e2cd8-9a9f-454d-b520-75769a722e55-kube-api-access-vfbbm" (OuterVolumeSpecName: "kube-api-access-vfbbm") pod "9a1e2cd8-9a9f-454d-b520-75769a722e55" (UID: "9a1e2cd8-9a9f-454d-b520-75769a722e55"). InnerVolumeSpecName "kube-api-access-vfbbm".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:22.917352 master-0 kubenswrapper[34361]: I0224 05:52:22.905864 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9jgw\" (UniqueName: \"kubernetes.io/projected/32b27462-7223-4f43-8eea-25a2dcd42b17-kube-api-access-l9jgw\") pod \"32b27462-7223-4f43-8eea-25a2dcd42b17\" (UID: \"32b27462-7223-4f43-8eea-25a2dcd42b17\") " Feb 24 05:52:22.918627 master-0 kubenswrapper[34361]: I0224 05:52:22.918579 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b27462-7223-4f43-8eea-25a2dcd42b17-operator-scripts\") pod \"32b27462-7223-4f43-8eea-25a2dcd42b17\" (UID: \"32b27462-7223-4f43-8eea-25a2dcd42b17\") " Feb 24 05:52:22.918781 master-0 kubenswrapper[34361]: I0224 05:52:22.918713 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b27462-7223-4f43-8eea-25a2dcd42b17-kube-api-access-l9jgw" (OuterVolumeSpecName: "kube-api-access-l9jgw") pod "32b27462-7223-4f43-8eea-25a2dcd42b17" (UID: "32b27462-7223-4f43-8eea-25a2dcd42b17"). InnerVolumeSpecName "kube-api-access-l9jgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:22.933842 master-0 kubenswrapper[34361]: I0224 05:52:22.919646 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b27462-7223-4f43-8eea-25a2dcd42b17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32b27462-7223-4f43-8eea-25a2dcd42b17" (UID: "32b27462-7223-4f43-8eea-25a2dcd42b17"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:22.938649 master-0 kubenswrapper[34361]: I0224 05:52:22.934784 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfbbm\" (UniqueName: \"kubernetes.io/projected/9a1e2cd8-9a9f-454d-b520-75769a722e55-kube-api-access-vfbbm\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:22.938649 master-0 kubenswrapper[34361]: I0224 05:52:22.938442 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a1e2cd8-9a9f-454d-b520-75769a722e55-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:22.938649 master-0 kubenswrapper[34361]: I0224 05:52:22.938506 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32b27462-7223-4f43-8eea-25a2dcd42b17-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:22.938649 master-0 kubenswrapper[34361]: I0224 05:52:22.938519 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9jgw\" (UniqueName: \"kubernetes.io/projected/32b27462-7223-4f43-8eea-25a2dcd42b17-kube-api-access-l9jgw\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:22.949102 master-0 kubenswrapper[34361]: I0224 05:52:22.945265 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-hgms6" event={"ID":"9a1e2cd8-9a9f-454d-b520-75769a722e55","Type":"ContainerDied","Data":"d14e081fa348fb424076b79487dd707513e6fe77ddcbebb2f5ccf9f540046796"} Feb 24 05:52:22.949102 master-0 kubenswrapper[34361]: I0224 05:52:22.945360 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d14e081fa348fb424076b79487dd707513e6fe77ddcbebb2f5ccf9f540046796" Feb 24 05:52:22.949102 master-0 kubenswrapper[34361]: I0224 05:52:22.949010 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-hgms6" Feb 24 05:52:22.966601 master-0 kubenswrapper[34361]: I0224 05:52:22.966491 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-b901-account-create-update-vmptn" event={"ID":"32b27462-7223-4f43-8eea-25a2dcd42b17","Type":"ContainerDied","Data":"26de47a233b29156ab49c82812f66aac2ec0c0a1f5d841d5a9aee92e8b4874bb"} Feb 24 05:52:22.966601 master-0 kubenswrapper[34361]: I0224 05:52:22.966599 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26de47a233b29156ab49c82812f66aac2ec0c0a1f5d841d5a9aee92e8b4874bb" Feb 24 05:52:22.966900 master-0 kubenswrapper[34361]: I0224 05:52:22.966686 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-b901-account-create-update-vmptn" Feb 24 05:52:23.517473 master-0 kubenswrapper[34361]: I0224 05:52:23.516519 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:52:23.711799 master-0 kubenswrapper[34361]: I0224 05:52:23.711755 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:23.783720 master-0 kubenswrapper[34361]: I0224 05:52:23.783574 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:24.036295 master-0 kubenswrapper[34361]: I0224 05:52:24.036072 34361 generic.go:334] "Generic (PLEG): container finished" podID="72a1000b-680d-4c11-a03c-d208f81272dd" containerID="c35c7ecde0912c81c6f0da24c270691434f7feef5c1a125826559e148787e233" exitCode=0 Feb 24 05:52:24.036295 master-0 kubenswrapper[34361]: I0224 05:52:24.036213 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dcp4q" event={"ID":"72a1000b-680d-4c11-a03c-d208f81272dd","Type":"ContainerDied","Data":"c35c7ecde0912c81c6f0da24c270691434f7feef5c1a125826559e148787e233"} Feb 24 05:52:24.060000 master-0 kubenswrapper[34361]: I0224 05:52:24.059427 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-629gt" event={"ID":"a033a9c9-abde-4d05-b958-06c6bb913e85","Type":"ContainerStarted","Data":"96979dff88f72790d528082b0711c7f48cc32173ad6267d2e6500d3c608b8037"} Feb 24 05:52:24.081294 master-0 kubenswrapper[34361]: I0224 05:52:24.081075 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"9353daa8-f1c5-493d-8f31-bfc3074c6223","Type":"ContainerStarted","Data":"88a62162cc1b7d58341f32b6deb8263d1cc7b2de23fed7a618d79da9c1aad7c7"} Feb 24 05:52:24.100434 master-0 kubenswrapper[34361]: I0224 05:52:24.100337 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-629gt" podStartSLOduration=3.766968645 podStartE2EDuration="11.100289215s" podCreationTimestamp="2026-02-24 05:52:13 +0000 UTC" firstStartedPulling="2026-02-24 05:52:15.296755483 +0000 UTC m=+894.999372529" lastFinishedPulling="2026-02-24 05:52:22.630076053 +0000 UTC m=+902.332693099" observedRunningTime="2026-02-24 05:52:24.092114654 +0000 UTC m=+903.794731700" watchObservedRunningTime="2026-02-24 05:52:24.100289215 +0000 UTC m=+903.802906261" Feb 24 05:52:24.194191 master-0 
kubenswrapper[34361]: I0224 05:52:24.193561 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:52:24.314301 master-0 kubenswrapper[34361]: I0224 05:52:24.313945 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674c8b7b9c-9fj6z"] Feb 24 05:52:24.314863 master-0 kubenswrapper[34361]: I0224 05:52:24.314748 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" podUID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" containerName="dnsmasq-dns" containerID="cri-o://cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823" gracePeriod=10 Feb 24 05:52:24.408261 master-0 kubenswrapper[34361]: I0224 05:52:24.406437 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-s9d6l"] Feb 24 05:52:24.408261 master-0 kubenswrapper[34361]: E0224 05:52:24.407186 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a1e2cd8-9a9f-454d-b520-75769a722e55" containerName="mariadb-database-create" Feb 24 05:52:24.408261 master-0 kubenswrapper[34361]: I0224 05:52:24.407205 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a1e2cd8-9a9f-454d-b520-75769a722e55" containerName="mariadb-database-create" Feb 24 05:52:24.408261 master-0 kubenswrapper[34361]: E0224 05:52:24.407281 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b27462-7223-4f43-8eea-25a2dcd42b17" containerName="mariadb-account-create-update" Feb 24 05:52:24.408261 master-0 kubenswrapper[34361]: I0224 05:52:24.407291 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b27462-7223-4f43-8eea-25a2dcd42b17" containerName="mariadb-account-create-update" Feb 24 05:52:24.408261 master-0 kubenswrapper[34361]: I0224 05:52:24.407635 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a1e2cd8-9a9f-454d-b520-75769a722e55" 
containerName="mariadb-database-create" Feb 24 05:52:24.408261 master-0 kubenswrapper[34361]: I0224 05:52:24.407654 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b27462-7223-4f43-8eea-25a2dcd42b17" containerName="mariadb-account-create-update" Feb 24 05:52:24.409260 master-0 kubenswrapper[34361]: I0224 05:52:24.409214 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.416459 master-0 kubenswrapper[34361]: I0224 05:52:24.416134 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 24 05:52:24.417981 master-0 kubenswrapper[34361]: I0224 05:52:24.416874 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Feb 24 05:52:24.432326 master-0 kubenswrapper[34361]: W0224 05:52:24.432192 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod167c633e_12d2_45f6_a746_7437ee0bbfff.slice/crio-6ab03244ef140d4acc3ab994daa83b061d31c315a4212896c9d2a272c8b71fc0 WatchSource:0}: Error finding container 6ab03244ef140d4acc3ab994daa83b061d31c315a4212896c9d2a272c8b71fc0: Status 404 returned error can't find the container with id 6ab03244ef140d4acc3ab994daa83b061d31c315a4212896c9d2a272c8b71fc0 Feb 24 05:52:24.434455 master-0 kubenswrapper[34361]: I0224 05:52:24.434031 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-s9d6l"] Feb 24 05:52:24.455427 master-0 kubenswrapper[34361]: I0224 05:52:24.454783 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:52:24.518601 master-0 kubenswrapper[34361]: I0224 05:52:24.518539 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/7e30393e-c247-4ba9-9db9-864d16ba6d82-etc-podinfo\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.519023 master-0 kubenswrapper[34361]: I0224 05:52:24.518651 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data-merged\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.519023 master-0 kubenswrapper[34361]: I0224 05:52:24.518750 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbkx\" (UniqueName: \"kubernetes.io/projected/7e30393e-c247-4ba9-9db9-864d16ba6d82-kube-api-access-8pbkx\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.519023 master-0 kubenswrapper[34361]: I0224 05:52:24.518771 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-scripts\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.519023 master-0 kubenswrapper[34361]: I0224 05:52:24.518809 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-combined-ca-bundle\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.519023 master-0 kubenswrapper[34361]: I0224 05:52:24.518849 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.621717 master-0 kubenswrapper[34361]: I0224 05:52:24.621657 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbkx\" (UniqueName: \"kubernetes.io/projected/7e30393e-c247-4ba9-9db9-864d16ba6d82-kube-api-access-8pbkx\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.621897 master-0 kubenswrapper[34361]: I0224 05:52:24.621733 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-scripts\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.621897 master-0 kubenswrapper[34361]: I0224 05:52:24.621807 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-combined-ca-bundle\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.622010 master-0 kubenswrapper[34361]: I0224 05:52:24.621904 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.622010 master-0 kubenswrapper[34361]: I0224 05:52:24.621931 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: 
\"kubernetes.io/downward-api/7e30393e-c247-4ba9-9db9-864d16ba6d82-etc-podinfo\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.622010 master-0 kubenswrapper[34361]: I0224 05:52:24.621986 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data-merged\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.622982 master-0 kubenswrapper[34361]: I0224 05:52:24.622926 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data-merged\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.627863 master-0 kubenswrapper[34361]: I0224 05:52:24.627829 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-scripts\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.627947 master-0 kubenswrapper[34361]: I0224 05:52:24.627902 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7e30393e-c247-4ba9-9db9-864d16ba6d82-etc-podinfo\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.628363 master-0 kubenswrapper[34361]: I0224 05:52:24.628294 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-combined-ca-bundle\") pod 
\"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.633417 master-0 kubenswrapper[34361]: I0224 05:52:24.633369 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.679622 master-0 kubenswrapper[34361]: I0224 05:52:24.679548 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbkx\" (UniqueName: \"kubernetes.io/projected/7e30393e-c247-4ba9-9db9-864d16ba6d82-kube-api-access-8pbkx\") pod \"ironic-db-sync-s9d6l\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:24.805336 master-0 kubenswrapper[34361]: I0224 05:52:24.804806 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:52:25.000757 master-0 kubenswrapper[34361]: I0224 05:52:25.000694 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:52:25.033799 master-0 kubenswrapper[34361]: I0224 05:52:25.032284 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-config\") pod \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " Feb 24 05:52:25.033799 master-0 kubenswrapper[34361]: I0224 05:52:25.032411 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-svc\") pod \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " Feb 24 05:52:25.033799 master-0 kubenswrapper[34361]: I0224 05:52:25.032507 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqc48\" (UniqueName: \"kubernetes.io/projected/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-kube-api-access-vqc48\") pod \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " Feb 24 05:52:25.033799 master-0 kubenswrapper[34361]: I0224 05:52:25.032563 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-sb\") pod \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " Feb 24 05:52:25.033799 master-0 kubenswrapper[34361]: I0224 05:52:25.032742 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-swift-storage-0\") pod \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " Feb 24 05:52:25.033799 master-0 kubenswrapper[34361]: I0224 05:52:25.032792 34361 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-nb\") pod \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\" (UID: \"ba54c348-0fa9-4fa5-8c7b-77aef67518a2\") " Feb 24 05:52:25.038958 master-0 kubenswrapper[34361]: I0224 05:52:25.038910 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-kube-api-access-vqc48" (OuterVolumeSpecName: "kube-api-access-vqc48") pod "ba54c348-0fa9-4fa5-8c7b-77aef67518a2" (UID: "ba54c348-0fa9-4fa5-8c7b-77aef67518a2"). InnerVolumeSpecName "kube-api-access-vqc48". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:25.112735 master-0 kubenswrapper[34361]: I0224 05:52:25.112664 34361 generic.go:334] "Generic (PLEG): container finished" podID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" containerID="cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823" exitCode=0 Feb 24 05:52:25.113038 master-0 kubenswrapper[34361]: I0224 05:52:25.112769 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" event={"ID":"ba54c348-0fa9-4fa5-8c7b-77aef67518a2","Type":"ContainerDied","Data":"cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823"} Feb 24 05:52:25.113038 master-0 kubenswrapper[34361]: I0224 05:52:25.112808 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" event={"ID":"ba54c348-0fa9-4fa5-8c7b-77aef67518a2","Type":"ContainerDied","Data":"ce98c0737b06c3c284339518e7a2aa21b1915edb7acf5ca02c7bfa31a07f6bf3"} Feb 24 05:52:25.113038 master-0 kubenswrapper[34361]: I0224 05:52:25.112827 34361 scope.go:117] "RemoveContainer" containerID="cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823" Feb 24 05:52:25.113038 master-0 kubenswrapper[34361]: I0224 05:52:25.112982 34361 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674c8b7b9c-9fj6z" Feb 24 05:52:25.115962 master-0 kubenswrapper[34361]: I0224 05:52:25.115891 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"167c633e-12d2-45f6-a746-7437ee0bbfff","Type":"ContainerStarted","Data":"6ab03244ef140d4acc3ab994daa83b061d31c315a4212896c9d2a272c8b71fc0"} Feb 24 05:52:25.119227 master-0 kubenswrapper[34361]: I0224 05:52:25.119183 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"9353daa8-f1c5-493d-8f31-bfc3074c6223","Type":"ContainerStarted","Data":"f21829fd6b0d389f5b690cffbcf84955a80e446f4b913fa10461795c84683f71"} Feb 24 05:52:25.135487 master-0 kubenswrapper[34361]: I0224 05:52:25.135398 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ba54c348-0fa9-4fa5-8c7b-77aef67518a2" (UID: "ba54c348-0fa9-4fa5-8c7b-77aef67518a2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:25.136041 master-0 kubenswrapper[34361]: I0224 05:52:25.135554 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqc48\" (UniqueName: \"kubernetes.io/projected/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-kube-api-access-vqc48\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.138548 master-0 kubenswrapper[34361]: I0224 05:52:25.138499 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-config" (OuterVolumeSpecName: "config") pod "ba54c348-0fa9-4fa5-8c7b-77aef67518a2" (UID: "ba54c348-0fa9-4fa5-8c7b-77aef67518a2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:25.144513 master-0 kubenswrapper[34361]: I0224 05:52:25.144424 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ba54c348-0fa9-4fa5-8c7b-77aef67518a2" (UID: "ba54c348-0fa9-4fa5-8c7b-77aef67518a2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:25.185379 master-0 kubenswrapper[34361]: I0224 05:52:25.185074 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ba54c348-0fa9-4fa5-8c7b-77aef67518a2" (UID: "ba54c348-0fa9-4fa5-8c7b-77aef67518a2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:25.196692 master-0 kubenswrapper[34361]: I0224 05:52:25.196586 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba54c348-0fa9-4fa5-8c7b-77aef67518a2" (UID: "ba54c348-0fa9-4fa5-8c7b-77aef67518a2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:25.237798 master-0 kubenswrapper[34361]: I0224 05:52:25.237724 34361 scope.go:117] "RemoveContainer" containerID="25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4" Feb 24 05:52:25.240837 master-0 kubenswrapper[34361]: I0224 05:52:25.240805 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.240929 master-0 kubenswrapper[34361]: I0224 05:52:25.240839 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.240929 master-0 kubenswrapper[34361]: I0224 05:52:25.240862 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.240929 master-0 kubenswrapper[34361]: I0224 05:52:25.240879 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.240929 master-0 kubenswrapper[34361]: I0224 05:52:25.240893 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba54c348-0fa9-4fa5-8c7b-77aef67518a2-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.284819 master-0 kubenswrapper[34361]: I0224 05:52:25.284510 34361 scope.go:117] "RemoveContainer" containerID="cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823" Feb 24 05:52:25.287358 master-0 kubenswrapper[34361]: E0224 05:52:25.287262 34361 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823\": container with ID starting with cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823 not found: ID does not exist" containerID="cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823" Feb 24 05:52:25.287358 master-0 kubenswrapper[34361]: I0224 05:52:25.287326 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823"} err="failed to get container status \"cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823\": rpc error: code = NotFound desc = could not find container \"cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823\": container with ID starting with cd7966152ad253ed9a029b847c67d7a51c0b9174f654091d97bc6cfceb2dc823 not found: ID does not exist" Feb 24 05:52:25.287358 master-0 kubenswrapper[34361]: I0224 05:52:25.287350 34361 scope.go:117] "RemoveContainer" containerID="25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4" Feb 24 05:52:25.288339 master-0 kubenswrapper[34361]: E0224 05:52:25.288295 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4\": container with ID starting with 25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4 not found: ID does not exist" containerID="25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4" Feb 24 05:52:25.288404 master-0 kubenswrapper[34361]: I0224 05:52:25.288340 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4"} err="failed to get container status \"25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4\": rpc error: code = NotFound 
desc = could not find container \"25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4\": container with ID starting with 25ff809ee0f0e4b1d8c1cef611e942c100236a718c0c4259239029b26da7f4d4 not found: ID does not exist" Feb 24 05:52:25.440436 master-0 kubenswrapper[34361]: I0224 05:52:25.440221 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-s9d6l"] Feb 24 05:52:25.465076 master-0 kubenswrapper[34361]: W0224 05:52:25.464736 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e30393e_c247_4ba9_9db9_864d16ba6d82.slice/crio-63d7eeb97c7bbb78d31359f9d0c2c236d4d6ed47b3d39b598e29fc0e3ace5a51 WatchSource:0}: Error finding container 63d7eeb97c7bbb78d31359f9d0c2c236d4d6ed47b3d39b598e29fc0e3ace5a51: Status 404 returned error can't find the container with id 63d7eeb97c7bbb78d31359f9d0c2c236d4d6ed47b3d39b598e29fc0e3ace5a51 Feb 24 05:52:25.568493 master-0 kubenswrapper[34361]: I0224 05:52:25.568284 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dcp4q" Feb 24 05:52:25.660795 master-0 kubenswrapper[34361]: I0224 05:52:25.660726 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674c8b7b9c-9fj6z"] Feb 24 05:52:25.756832 master-0 kubenswrapper[34361]: I0224 05:52:25.753112 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-674c8b7b9c-9fj6z"] Feb 24 05:52:25.762044 master-0 kubenswrapper[34361]: I0224 05:52:25.761964 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-config-data\") pod \"72a1000b-680d-4c11-a03c-d208f81272dd\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " Feb 24 05:52:25.762442 master-0 kubenswrapper[34361]: I0224 05:52:25.762380 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-combined-ca-bundle\") pod \"72a1000b-680d-4c11-a03c-d208f81272dd\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " Feb 24 05:52:25.762542 master-0 kubenswrapper[34361]: I0224 05:52:25.762498 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-credential-keys\") pod \"72a1000b-680d-4c11-a03c-d208f81272dd\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " Feb 24 05:52:25.762792 master-0 kubenswrapper[34361]: I0224 05:52:25.762609 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-scripts\") pod \"72a1000b-680d-4c11-a03c-d208f81272dd\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " Feb 24 05:52:25.762792 master-0 kubenswrapper[34361]: I0224 05:52:25.762649 34361 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-9pmqw\" (UniqueName: \"kubernetes.io/projected/72a1000b-680d-4c11-a03c-d208f81272dd-kube-api-access-9pmqw\") pod \"72a1000b-680d-4c11-a03c-d208f81272dd\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " Feb 24 05:52:25.762792 master-0 kubenswrapper[34361]: I0224 05:52:25.762764 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-fernet-keys\") pod \"72a1000b-680d-4c11-a03c-d208f81272dd\" (UID: \"72a1000b-680d-4c11-a03c-d208f81272dd\") " Feb 24 05:52:25.767478 master-0 kubenswrapper[34361]: I0224 05:52:25.767151 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "72a1000b-680d-4c11-a03c-d208f81272dd" (UID: "72a1000b-680d-4c11-a03c-d208f81272dd"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:25.767861 master-0 kubenswrapper[34361]: I0224 05:52:25.767201 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-scripts" (OuterVolumeSpecName: "scripts") pod "72a1000b-680d-4c11-a03c-d208f81272dd" (UID: "72a1000b-680d-4c11-a03c-d208f81272dd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:25.767861 master-0 kubenswrapper[34361]: I0224 05:52:25.767646 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a1000b-680d-4c11-a03c-d208f81272dd-kube-api-access-9pmqw" (OuterVolumeSpecName: "kube-api-access-9pmqw") pod "72a1000b-680d-4c11-a03c-d208f81272dd" (UID: "72a1000b-680d-4c11-a03c-d208f81272dd"). InnerVolumeSpecName "kube-api-access-9pmqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:25.815948 master-0 kubenswrapper[34361]: I0224 05:52:25.815857 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "72a1000b-680d-4c11-a03c-d208f81272dd" (UID: "72a1000b-680d-4c11-a03c-d208f81272dd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:25.823098 master-0 kubenswrapper[34361]: I0224 05:52:25.823039 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-config-data" (OuterVolumeSpecName: "config-data") pod "72a1000b-680d-4c11-a03c-d208f81272dd" (UID: "72a1000b-680d-4c11-a03c-d208f81272dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:25.823195 master-0 kubenswrapper[34361]: I0224 05:52:25.823167 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72a1000b-680d-4c11-a03c-d208f81272dd" (UID: "72a1000b-680d-4c11-a03c-d208f81272dd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:25.866989 master-0 kubenswrapper[34361]: I0224 05:52:25.866900 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.866989 master-0 kubenswrapper[34361]: I0224 05:52:25.866983 34361 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.866989 master-0 kubenswrapper[34361]: I0224 05:52:25.867006 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.869494 master-0 kubenswrapper[34361]: I0224 05:52:25.867028 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pmqw\" (UniqueName: \"kubernetes.io/projected/72a1000b-680d-4c11-a03c-d208f81272dd-kube-api-access-9pmqw\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.869494 master-0 kubenswrapper[34361]: I0224 05:52:25.867053 34361 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:25.869494 master-0 kubenswrapper[34361]: I0224 05:52:25.867078 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a1000b-680d-4c11-a03c-d208f81272dd-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:26.139879 master-0 kubenswrapper[34361]: I0224 05:52:26.139799 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s9d6l" 
event={"ID":"7e30393e-c247-4ba9-9db9-864d16ba6d82","Type":"ContainerStarted","Data":"63d7eeb97c7bbb78d31359f9d0c2c236d4d6ed47b3d39b598e29fc0e3ace5a51"} Feb 24 05:52:26.143612 master-0 kubenswrapper[34361]: I0224 05:52:26.143573 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"9353daa8-f1c5-493d-8f31-bfc3074c6223","Type":"ContainerStarted","Data":"c3119b8a3607fa9c3df6b54da589b714968f583afefab83eb324d1696714b2b6"} Feb 24 05:52:26.148030 master-0 kubenswrapper[34361]: I0224 05:52:26.147925 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dcp4q" event={"ID":"72a1000b-680d-4c11-a03c-d208f81272dd","Type":"ContainerDied","Data":"d7afc3022022e3c99044351489648cd6b9c053e5f1ded198c24627008473507f"} Feb 24 05:52:26.148199 master-0 kubenswrapper[34361]: I0224 05:52:26.148179 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7afc3022022e3c99044351489648cd6b9c053e5f1ded198c24627008473507f" Feb 24 05:52:26.148275 master-0 kubenswrapper[34361]: I0224 05:52:26.148141 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dcp4q" Feb 24 05:52:26.151005 master-0 kubenswrapper[34361]: I0224 05:52:26.150969 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"167c633e-12d2-45f6-a746-7437ee0bbfff","Type":"ContainerStarted","Data":"0942ee6a98d6bb5a9765463e9f2e7b660623d6201a2a9b274816da5132fa8c64"} Feb 24 05:52:26.188888 master-0 kubenswrapper[34361]: I0224 05:52:26.188801 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bdafd-default-internal-api-0" podStartSLOduration=11.188773432 podStartE2EDuration="11.188773432s" podCreationTimestamp="2026-02-24 05:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:26.166639925 +0000 UTC m=+905.869256971" watchObservedRunningTime="2026-02-24 05:52:26.188773432 +0000 UTC m=+905.891390478" Feb 24 05:52:26.202173 master-0 kubenswrapper[34361]: I0224 05:52:26.202077 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bdafd-default-external-api-0" podStartSLOduration=4.20205243 podStartE2EDuration="4.20205243s" podCreationTimestamp="2026-02-24 05:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:26.197731644 +0000 UTC m=+905.900348700" watchObservedRunningTime="2026-02-24 05:52:26.20205243 +0000 UTC m=+905.904669476" Feb 24 05:52:26.376195 master-0 kubenswrapper[34361]: I0224 05:52:26.376027 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dcp4q"] Feb 24 05:52:26.389882 master-0 kubenswrapper[34361]: I0224 05:52:26.389752 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dcp4q"] Feb 24 05:52:26.478745 master-0 kubenswrapper[34361]: I0224 
05:52:26.478642 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-trt9l"] Feb 24 05:52:26.479455 master-0 kubenswrapper[34361]: E0224 05:52:26.479428 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" containerName="init" Feb 24 05:52:26.479455 master-0 kubenswrapper[34361]: I0224 05:52:26.479449 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" containerName="init" Feb 24 05:52:26.479546 master-0 kubenswrapper[34361]: E0224 05:52:26.479489 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" containerName="dnsmasq-dns" Feb 24 05:52:26.479546 master-0 kubenswrapper[34361]: I0224 05:52:26.479497 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" containerName="dnsmasq-dns" Feb 24 05:52:26.479546 master-0 kubenswrapper[34361]: E0224 05:52:26.479537 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a1000b-680d-4c11-a03c-d208f81272dd" containerName="keystone-bootstrap" Feb 24 05:52:26.479546 master-0 kubenswrapper[34361]: I0224 05:52:26.479545 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a1000b-680d-4c11-a03c-d208f81272dd" containerName="keystone-bootstrap" Feb 24 05:52:26.479799 master-0 kubenswrapper[34361]: I0224 05:52:26.479774 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" containerName="dnsmasq-dns" Feb 24 05:52:26.479799 master-0 kubenswrapper[34361]: I0224 05:52:26.479792 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a1000b-680d-4c11-a03c-d208f81272dd" containerName="keystone-bootstrap" Feb 24 05:52:26.480675 master-0 kubenswrapper[34361]: I0224 05:52:26.480645 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.484869 master-0 kubenswrapper[34361]: I0224 05:52:26.484814 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 05:52:26.486533 master-0 kubenswrapper[34361]: I0224 05:52:26.486500 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 05:52:26.505297 master-0 kubenswrapper[34361]: I0224 05:52:26.504756 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 05:52:26.538581 master-0 kubenswrapper[34361]: I0224 05:52:26.538487 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-credential-keys\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.539930 master-0 kubenswrapper[34361]: I0224 05:52:26.539871 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-combined-ca-bundle\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.540092 master-0 kubenswrapper[34361]: I0224 05:52:26.540065 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf849\" (UniqueName: \"kubernetes.io/projected/6eb75ff2-586c-4d0c-bb92-967635ac99d0-kube-api-access-cf849\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.540179 master-0 kubenswrapper[34361]: I0224 05:52:26.540119 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-scripts\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.540546 master-0 kubenswrapper[34361]: I0224 05:52:26.540347 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-config-data\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.540546 master-0 kubenswrapper[34361]: I0224 05:52:26.540501 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-fernet-keys\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.548891 master-0 kubenswrapper[34361]: I0224 05:52:26.548787 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-trt9l"] Feb 24 05:52:26.616238 master-0 kubenswrapper[34361]: I0224 05:52:26.616160 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a1000b-680d-4c11-a03c-d208f81272dd" path="/var/lib/kubelet/pods/72a1000b-680d-4c11-a03c-d208f81272dd/volumes" Feb 24 05:52:26.616921 master-0 kubenswrapper[34361]: I0224 05:52:26.616885 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba54c348-0fa9-4fa5-8c7b-77aef67518a2" path="/var/lib/kubelet/pods/ba54c348-0fa9-4fa5-8c7b-77aef67518a2/volumes" Feb 24 05:52:26.651301 master-0 kubenswrapper[34361]: I0224 05:52:26.651079 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-combined-ca-bundle\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.651301 master-0 kubenswrapper[34361]: I0224 05:52:26.651197 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf849\" (UniqueName: \"kubernetes.io/projected/6eb75ff2-586c-4d0c-bb92-967635ac99d0-kube-api-access-cf849\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.651301 master-0 kubenswrapper[34361]: I0224 05:52:26.651238 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-scripts\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.651853 master-0 kubenswrapper[34361]: I0224 05:52:26.651362 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-config-data\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.651853 master-0 kubenswrapper[34361]: I0224 05:52:26.651472 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-fernet-keys\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.652540 master-0 kubenswrapper[34361]: I0224 05:52:26.651714 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-credential-keys\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.656961 master-0 kubenswrapper[34361]: I0224 05:52:26.656913 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-config-data\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.660010 master-0 kubenswrapper[34361]: I0224 05:52:26.659939 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-fernet-keys\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.661486 master-0 kubenswrapper[34361]: I0224 05:52:26.661344 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-credential-keys\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.661731 master-0 kubenswrapper[34361]: I0224 05:52:26.661685 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-combined-ca-bundle\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.663801 master-0 kubenswrapper[34361]: I0224 05:52:26.663764 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-scripts\") pod 
\"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.673665 master-0 kubenswrapper[34361]: I0224 05:52:26.673613 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf849\" (UniqueName: \"kubernetes.io/projected/6eb75ff2-586c-4d0c-bb92-967635ac99d0-kube-api-access-cf849\") pod \"keystone-bootstrap-trt9l\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:26.828964 master-0 kubenswrapper[34361]: I0224 05:52:26.828893 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:27.181864 master-0 kubenswrapper[34361]: I0224 05:52:27.181782 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"167c633e-12d2-45f6-a746-7437ee0bbfff","Type":"ContainerStarted","Data":"e7c9aa8d6472db53b2f7d10161065f254ecbe1f61c23fa12fbb9fd5d661a9703"} Feb 24 05:52:27.185441 master-0 kubenswrapper[34361]: I0224 05:52:27.185397 34361 generic.go:334] "Generic (PLEG): container finished" podID="a033a9c9-abde-4d05-b958-06c6bb913e85" containerID="96979dff88f72790d528082b0711c7f48cc32173ad6267d2e6500d3c608b8037" exitCode=0 Feb 24 05:52:27.185537 master-0 kubenswrapper[34361]: I0224 05:52:27.185464 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-629gt" event={"ID":"a033a9c9-abde-4d05-b958-06c6bb913e85","Type":"ContainerDied","Data":"96979dff88f72790d528082b0711c7f48cc32173ad6267d2e6500d3c608b8037"} Feb 24 05:52:27.387295 master-0 kubenswrapper[34361]: I0224 05:52:27.384940 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-trt9l"] Feb 24 05:52:27.397397 master-0 kubenswrapper[34361]: W0224 05:52:27.397263 34361 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eb75ff2_586c_4d0c_bb92_967635ac99d0.slice/crio-08f3a6c3b6de489ca428b09f15a1ac75a2d4e02d9e9c1242e27005c27f753cba WatchSource:0}: Error finding container 08f3a6c3b6de489ca428b09f15a1ac75a2d4e02d9e9c1242e27005c27f753cba: Status 404 returned error can't find the container with id 08f3a6c3b6de489ca428b09f15a1ac75a2d4e02d9e9c1242e27005c27f753cba Feb 24 05:52:28.202814 master-0 kubenswrapper[34361]: I0224 05:52:28.202719 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-trt9l" event={"ID":"6eb75ff2-586c-4d0c-bb92-967635ac99d0","Type":"ContainerStarted","Data":"ec9e0f959f58b15d2ac33c7f7fe7637fca1a3c27908113b8181f4a982095b802"} Feb 24 05:52:28.202814 master-0 kubenswrapper[34361]: I0224 05:52:28.202788 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-trt9l" event={"ID":"6eb75ff2-586c-4d0c-bb92-967635ac99d0","Type":"ContainerStarted","Data":"08f3a6c3b6de489ca428b09f15a1ac75a2d4e02d9e9c1242e27005c27f753cba"} Feb 24 05:52:28.248347 master-0 kubenswrapper[34361]: I0224 05:52:28.248192 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-trt9l" podStartSLOduration=2.248165026 podStartE2EDuration="2.248165026s" podCreationTimestamp="2026-02-24 05:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:28.233562812 +0000 UTC m=+907.936179888" watchObservedRunningTime="2026-02-24 05:52:28.248165026 +0000 UTC m=+907.950782072" Feb 24 05:52:30.391380 master-0 kubenswrapper[34361]: I0224 05:52:30.391136 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-629gt" Feb 24 05:52:30.585562 master-0 kubenswrapper[34361]: I0224 05:52:30.585376 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-scripts\") pod \"a033a9c9-abde-4d05-b958-06c6bb913e85\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " Feb 24 05:52:30.585955 master-0 kubenswrapper[34361]: I0224 05:52:30.585905 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-config-data\") pod \"a033a9c9-abde-4d05-b958-06c6bb913e85\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " Feb 24 05:52:30.586152 master-0 kubenswrapper[34361]: I0224 05:52:30.586130 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skf6k\" (UniqueName: \"kubernetes.io/projected/a033a9c9-abde-4d05-b958-06c6bb913e85-kube-api-access-skf6k\") pod \"a033a9c9-abde-4d05-b958-06c6bb913e85\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " Feb 24 05:52:30.586204 master-0 kubenswrapper[34361]: I0224 05:52:30.586191 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-combined-ca-bundle\") pod \"a033a9c9-abde-4d05-b958-06c6bb913e85\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " Feb 24 05:52:30.586591 master-0 kubenswrapper[34361]: I0224 05:52:30.586541 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a033a9c9-abde-4d05-b958-06c6bb913e85-logs\") pod \"a033a9c9-abde-4d05-b958-06c6bb913e85\" (UID: \"a033a9c9-abde-4d05-b958-06c6bb913e85\") " Feb 24 05:52:30.587114 master-0 kubenswrapper[34361]: I0224 05:52:30.587072 34361 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a033a9c9-abde-4d05-b958-06c6bb913e85-logs" (OuterVolumeSpecName: "logs") pod "a033a9c9-abde-4d05-b958-06c6bb913e85" (UID: "a033a9c9-abde-4d05-b958-06c6bb913e85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:52:30.588648 master-0 kubenswrapper[34361]: I0224 05:52:30.588614 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a033a9c9-abde-4d05-b958-06c6bb913e85-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:30.590519 master-0 kubenswrapper[34361]: I0224 05:52:30.590267 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-scripts" (OuterVolumeSpecName: "scripts") pod "a033a9c9-abde-4d05-b958-06c6bb913e85" (UID: "a033a9c9-abde-4d05-b958-06c6bb913e85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:30.597805 master-0 kubenswrapper[34361]: I0224 05:52:30.597731 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a033a9c9-abde-4d05-b958-06c6bb913e85-kube-api-access-skf6k" (OuterVolumeSpecName: "kube-api-access-skf6k") pod "a033a9c9-abde-4d05-b958-06c6bb913e85" (UID: "a033a9c9-abde-4d05-b958-06c6bb913e85"). InnerVolumeSpecName "kube-api-access-skf6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:30.617427 master-0 kubenswrapper[34361]: I0224 05:52:30.617354 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a033a9c9-abde-4d05-b958-06c6bb913e85" (UID: "a033a9c9-abde-4d05-b958-06c6bb913e85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:30.636627 master-0 kubenswrapper[34361]: I0224 05:52:30.636546 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-config-data" (OuterVolumeSpecName: "config-data") pod "a033a9c9-abde-4d05-b958-06c6bb913e85" (UID: "a033a9c9-abde-4d05-b958-06c6bb913e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:30.691248 master-0 kubenswrapper[34361]: I0224 05:52:30.691158 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skf6k\" (UniqueName: \"kubernetes.io/projected/a033a9c9-abde-4d05-b958-06c6bb913e85-kube-api-access-skf6k\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:30.691248 master-0 kubenswrapper[34361]: I0224 05:52:30.691229 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:30.691248 master-0 kubenswrapper[34361]: I0224 05:52:30.691245 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:30.691248 master-0 kubenswrapper[34361]: I0224 05:52:30.691257 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a033a9c9-abde-4d05-b958-06c6bb913e85-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:31.255999 master-0 kubenswrapper[34361]: I0224 05:52:31.255912 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-629gt" event={"ID":"a033a9c9-abde-4d05-b958-06c6bb913e85","Type":"ContainerDied","Data":"6da48d88c63a8be25aa2e5e4a182352bafe8bf63ce18dc721a2bd00d1c00b344"} Feb 24 05:52:31.255999 master-0 
kubenswrapper[34361]: I0224 05:52:31.255978 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da48d88c63a8be25aa2e5e4a182352bafe8bf63ce18dc721a2bd00d1c00b344" Feb 24 05:52:31.255999 master-0 kubenswrapper[34361]: I0224 05:52:31.255986 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-629gt" Feb 24 05:52:31.260193 master-0 kubenswrapper[34361]: I0224 05:52:31.260139 34361 generic.go:334] "Generic (PLEG): container finished" podID="6eb75ff2-586c-4d0c-bb92-967635ac99d0" containerID="ec9e0f959f58b15d2ac33c7f7fe7637fca1a3c27908113b8181f4a982095b802" exitCode=0 Feb 24 05:52:31.260193 master-0 kubenswrapper[34361]: I0224 05:52:31.260183 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-trt9l" event={"ID":"6eb75ff2-586c-4d0c-bb92-967635ac99d0","Type":"ContainerDied","Data":"ec9e0f959f58b15d2ac33c7f7fe7637fca1a3c27908113b8181f4a982095b802"} Feb 24 05:52:31.672861 master-0 kubenswrapper[34361]: I0224 05:52:31.672650 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fb464bf7d-gv8b6"] Feb 24 05:52:31.673722 master-0 kubenswrapper[34361]: E0224 05:52:31.673413 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a033a9c9-abde-4d05-b958-06c6bb913e85" containerName="placement-db-sync" Feb 24 05:52:31.673722 master-0 kubenswrapper[34361]: I0224 05:52:31.673429 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="a033a9c9-abde-4d05-b958-06c6bb913e85" containerName="placement-db-sync" Feb 24 05:52:31.673827 master-0 kubenswrapper[34361]: I0224 05:52:31.673741 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="a033a9c9-abde-4d05-b958-06c6bb913e85" containerName="placement-db-sync" Feb 24 05:52:31.675041 master-0 kubenswrapper[34361]: I0224 05:52:31.675012 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.680587 master-0 kubenswrapper[34361]: I0224 05:52:31.680499 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb464bf7d-gv8b6"] Feb 24 05:52:31.684330 master-0 kubenswrapper[34361]: I0224 05:52:31.682061 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 24 05:52:31.684330 master-0 kubenswrapper[34361]: I0224 05:52:31.682436 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 24 05:52:31.684330 master-0 kubenswrapper[34361]: I0224 05:52:31.683215 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 24 05:52:31.685324 master-0 kubenswrapper[34361]: I0224 05:52:31.683377 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 24 05:52:31.823804 master-0 kubenswrapper[34361]: I0224 05:52:31.823684 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-config-data\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.823804 master-0 kubenswrapper[34361]: I0224 05:52:31.823816 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-scripts\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.824180 master-0 kubenswrapper[34361]: I0224 05:52:31.823900 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-internal-tls-certs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.824180 master-0 kubenswrapper[34361]: I0224 05:52:31.823983 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05343afd-e975-47cb-a3f4-58664d26d871-logs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.824180 master-0 kubenswrapper[34361]: I0224 05:52:31.824047 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-public-tls-certs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.824180 master-0 kubenswrapper[34361]: I0224 05:52:31.824110 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-combined-ca-bundle\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.824180 master-0 kubenswrapper[34361]: I0224 05:52:31.824165 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4pxh\" (UniqueName: \"kubernetes.io/projected/05343afd-e975-47cb-a3f4-58664d26d871-kube-api-access-m4pxh\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.927285 master-0 kubenswrapper[34361]: I0224 05:52:31.927060 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-combined-ca-bundle\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.927285 master-0 kubenswrapper[34361]: I0224 05:52:31.927143 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4pxh\" (UniqueName: \"kubernetes.io/projected/05343afd-e975-47cb-a3f4-58664d26d871-kube-api-access-m4pxh\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.927285 master-0 kubenswrapper[34361]: I0224 05:52:31.927299 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-config-data\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.928274 master-0 kubenswrapper[34361]: I0224 05:52:31.927569 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-scripts\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.928274 master-0 kubenswrapper[34361]: I0224 05:52:31.927630 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-internal-tls-certs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.928274 master-0 kubenswrapper[34361]: I0224 05:52:31.927690 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05343afd-e975-47cb-a3f4-58664d26d871-logs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.928274 master-0 kubenswrapper[34361]: I0224 05:52:31.927725 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-public-tls-certs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.930255 master-0 kubenswrapper[34361]: I0224 05:52:31.930191 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05343afd-e975-47cb-a3f4-58664d26d871-logs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.938909 master-0 kubenswrapper[34361]: I0224 05:52:31.938016 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-scripts\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.938909 master-0 kubenswrapper[34361]: I0224 05:52:31.938139 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-config-data\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.938909 master-0 kubenswrapper[34361]: I0224 05:52:31.938783 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-combined-ca-bundle\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.939883 master-0 kubenswrapper[34361]: I0224 05:52:31.939826 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-public-tls-certs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.941015 master-0 kubenswrapper[34361]: I0224 05:52:31.940965 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-internal-tls-certs\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:31.947837 master-0 kubenswrapper[34361]: I0224 05:52:31.947700 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4pxh\" (UniqueName: \"kubernetes.io/projected/05343afd-e975-47cb-a3f4-58664d26d871-kube-api-access-m4pxh\") pod \"placement-fb464bf7d-gv8b6\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:32.001094 master-0 kubenswrapper[34361]: I0224 05:52:32.000982 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:32.229535 master-0 kubenswrapper[34361]: I0224 05:52:32.227702 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:32.229535 master-0 kubenswrapper[34361]: I0224 05:52:32.227761 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:32.269095 master-0 kubenswrapper[34361]: I0224 05:52:32.269024 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:32.270375 master-0 kubenswrapper[34361]: I0224 05:52:32.270344 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:32.273126 master-0 kubenswrapper[34361]: I0224 05:52:32.271362 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:32.273126 master-0 kubenswrapper[34361]: I0224 05:52:32.271435 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:33.784622 master-0 kubenswrapper[34361]: I0224 05:52:33.784535 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:33.784622 master-0 kubenswrapper[34361]: I0224 05:52:33.784637 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:33.821663 master-0 kubenswrapper[34361]: I0224 05:52:33.821520 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:33.843247 master-0 kubenswrapper[34361]: I0224 05:52:33.843170 34361 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:34.313000 master-0 kubenswrapper[34361]: I0224 05:52:34.312913 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:34.313000 master-0 kubenswrapper[34361]: I0224 05:52:34.313012 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:34.469887 master-0 kubenswrapper[34361]: I0224 05:52:34.469787 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:34.470155 master-0 kubenswrapper[34361]: I0224 05:52:34.469930 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:52:34.520948 master-0 kubenswrapper[34361]: I0224 05:52:34.520872 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:52:36.539931 master-0 kubenswrapper[34361]: I0224 05:52:36.539846 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:36.540707 master-0 kubenswrapper[34361]: I0224 05:52:36.540018 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 24 05:52:36.744362 master-0 kubenswrapper[34361]: I0224 05:52:36.744106 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:52:41.082483 master-0 kubenswrapper[34361]: I0224 05:52:41.082216 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:41.179173 master-0 kubenswrapper[34361]: I0224 05:52:41.179072 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-config-data\") pod \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " Feb 24 05:52:41.179173 master-0 kubenswrapper[34361]: I0224 05:52:41.179176 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-fernet-keys\") pod \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " Feb 24 05:52:41.180451 master-0 kubenswrapper[34361]: I0224 05:52:41.179624 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf849\" (UniqueName: \"kubernetes.io/projected/6eb75ff2-586c-4d0c-bb92-967635ac99d0-kube-api-access-cf849\") pod \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " Feb 24 05:52:41.185658 master-0 kubenswrapper[34361]: I0224 05:52:41.185402 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb75ff2-586c-4d0c-bb92-967635ac99d0-kube-api-access-cf849" (OuterVolumeSpecName: "kube-api-access-cf849") pod "6eb75ff2-586c-4d0c-bb92-967635ac99d0" (UID: "6eb75ff2-586c-4d0c-bb92-967635ac99d0"). InnerVolumeSpecName "kube-api-access-cf849". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:41.189174 master-0 kubenswrapper[34361]: I0224 05:52:41.189105 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6eb75ff2-586c-4d0c-bb92-967635ac99d0" (UID: "6eb75ff2-586c-4d0c-bb92-967635ac99d0"). 
InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:41.236502 master-0 kubenswrapper[34361]: I0224 05:52:41.236409 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-config-data" (OuterVolumeSpecName: "config-data") pod "6eb75ff2-586c-4d0c-bb92-967635ac99d0" (UID: "6eb75ff2-586c-4d0c-bb92-967635ac99d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:41.283029 master-0 kubenswrapper[34361]: I0224 05:52:41.281232 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-scripts\") pod \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " Feb 24 05:52:41.283029 master-0 kubenswrapper[34361]: I0224 05:52:41.281370 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-combined-ca-bundle\") pod \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " Feb 24 05:52:41.283029 master-0 kubenswrapper[34361]: I0224 05:52:41.281423 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-credential-keys\") pod \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\" (UID: \"6eb75ff2-586c-4d0c-bb92-967635ac99d0\") " Feb 24 05:52:41.283029 master-0 kubenswrapper[34361]: I0224 05:52:41.281973 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:41.283029 master-0 kubenswrapper[34361]: I0224 05:52:41.281988 34361 
reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:41.283029 master-0 kubenswrapper[34361]: I0224 05:52:41.282020 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf849\" (UniqueName: \"kubernetes.io/projected/6eb75ff2-586c-4d0c-bb92-967635ac99d0-kube-api-access-cf849\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:41.285924 master-0 kubenswrapper[34361]: I0224 05:52:41.285862 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-scripts" (OuterVolumeSpecName: "scripts") pod "6eb75ff2-586c-4d0c-bb92-967635ac99d0" (UID: "6eb75ff2-586c-4d0c-bb92-967635ac99d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:41.288249 master-0 kubenswrapper[34361]: I0224 05:52:41.288175 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6eb75ff2-586c-4d0c-bb92-967635ac99d0" (UID: "6eb75ff2-586c-4d0c-bb92-967635ac99d0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:41.330308 master-0 kubenswrapper[34361]: I0224 05:52:41.330220 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eb75ff2-586c-4d0c-bb92-967635ac99d0" (UID: "6eb75ff2-586c-4d0c-bb92-967635ac99d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:41.385022 master-0 kubenswrapper[34361]: I0224 05:52:41.384926 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:41.385022 master-0 kubenswrapper[34361]: I0224 05:52:41.384999 34361 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:41.385022 master-0 kubenswrapper[34361]: I0224 05:52:41.385012 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb75ff2-586c-4d0c-bb92-967635ac99d0-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:41.416193 master-0 kubenswrapper[34361]: I0224 05:52:41.416112 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-trt9l" event={"ID":"6eb75ff2-586c-4d0c-bb92-967635ac99d0","Type":"ContainerDied","Data":"08f3a6c3b6de489ca428b09f15a1ac75a2d4e02d9e9c1242e27005c27f753cba"} Feb 24 05:52:41.416193 master-0 kubenswrapper[34361]: I0224 05:52:41.416176 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f3a6c3b6de489ca428b09f15a1ac75a2d4e02d9e9c1242e27005c27f753cba" Feb 24 05:52:41.416193 master-0 kubenswrapper[34361]: I0224 05:52:41.416179 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-trt9l" Feb 24 05:52:42.355676 master-0 kubenswrapper[34361]: I0224 05:52:42.355524 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-64cf598f88-t2877"] Feb 24 05:52:42.357742 master-0 kubenswrapper[34361]: E0224 05:52:42.357714 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb75ff2-586c-4d0c-bb92-967635ac99d0" containerName="keystone-bootstrap" Feb 24 05:52:42.357864 master-0 kubenswrapper[34361]: I0224 05:52:42.357849 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb75ff2-586c-4d0c-bb92-967635ac99d0" containerName="keystone-bootstrap" Feb 24 05:52:42.358301 master-0 kubenswrapper[34361]: I0224 05:52:42.358281 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb75ff2-586c-4d0c-bb92-967635ac99d0" containerName="keystone-bootstrap" Feb 24 05:52:42.360000 master-0 kubenswrapper[34361]: I0224 05:52:42.359982 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.418193 master-0 kubenswrapper[34361]: I0224 05:52:42.417767 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 24 05:52:42.421732 master-0 kubenswrapper[34361]: I0224 05:52:42.420852 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 24 05:52:42.423121 master-0 kubenswrapper[34361]: I0224 05:52:42.421822 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 24 05:52:42.423121 master-0 kubenswrapper[34361]: I0224 05:52:42.422566 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 24 05:52:42.449096 master-0 kubenswrapper[34361]: I0224 05:52:42.448638 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-public-tls-certs\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.449096 master-0 kubenswrapper[34361]: I0224 05:52:42.448731 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-credential-keys\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.449096 master-0 kubenswrapper[34361]: I0224 05:52:42.448827 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-config-data\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " 
pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.449096 master-0 kubenswrapper[34361]: I0224 05:52:42.448907 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-internal-tls-certs\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.449096 master-0 kubenswrapper[34361]: I0224 05:52:42.449018 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-scripts\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.449626 master-0 kubenswrapper[34361]: I0224 05:52:42.449131 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-fernet-keys\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.449626 master-0 kubenswrapper[34361]: I0224 05:52:42.449335 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtsb\" (UniqueName: \"kubernetes.io/projected/642b528f-fe86-4321-8818-131d99e034f4-kube-api-access-nvtsb\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.449727 master-0 kubenswrapper[34361]: I0224 05:52:42.449656 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-combined-ca-bundle\") pod 
\"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.459306 master-0 kubenswrapper[34361]: I0224 05:52:42.459227 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 24 05:52:42.482094 master-0 kubenswrapper[34361]: I0224 05:52:42.482010 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64cf598f88-t2877"] Feb 24 05:52:42.566699 master-0 kubenswrapper[34361]: I0224 05:52:42.566616 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-credential-keys\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.566699 master-0 kubenswrapper[34361]: I0224 05:52:42.566692 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-config-data\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.566882 master-0 kubenswrapper[34361]: I0224 05:52:42.566733 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-internal-tls-certs\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.566882 master-0 kubenswrapper[34361]: I0224 05:52:42.566809 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-scripts\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " 
pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.566882 master-0 kubenswrapper[34361]: I0224 05:52:42.566832 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-fernet-keys\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.566882 master-0 kubenswrapper[34361]: I0224 05:52:42.566860 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvtsb\" (UniqueName: \"kubernetes.io/projected/642b528f-fe86-4321-8818-131d99e034f4-kube-api-access-nvtsb\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.567003 master-0 kubenswrapper[34361]: I0224 05:52:42.566905 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-combined-ca-bundle\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.567003 master-0 kubenswrapper[34361]: I0224 05:52:42.566947 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-public-tls-certs\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.571032 master-0 kubenswrapper[34361]: I0224 05:52:42.570994 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-scripts\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " 
pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.571429 master-0 kubenswrapper[34361]: I0224 05:52:42.571401 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-config-data\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.572015 master-0 kubenswrapper[34361]: I0224 05:52:42.571972 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-internal-tls-certs\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.572912 master-0 kubenswrapper[34361]: I0224 05:52:42.572875 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-credential-keys\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.573187 master-0 kubenswrapper[34361]: I0224 05:52:42.573094 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-public-tls-certs\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.574438 master-0 kubenswrapper[34361]: I0224 05:52:42.574402 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-combined-ca-bundle\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 
05:52:42.583339 master-0 kubenswrapper[34361]: I0224 05:52:42.581003 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/642b528f-fe86-4321-8818-131d99e034f4-fernet-keys\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.590663 master-0 kubenswrapper[34361]: I0224 05:52:42.590557 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvtsb\" (UniqueName: \"kubernetes.io/projected/642b528f-fe86-4321-8818-131d99e034f4-kube-api-access-nvtsb\") pod \"keystone-64cf598f88-t2877\" (UID: \"642b528f-fe86-4321-8818-131d99e034f4\") " pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.671725 master-0 kubenswrapper[34361]: I0224 05:52:42.671155 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:42.969704 master-0 kubenswrapper[34361]: I0224 05:52:42.969600 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fb464bf7d-gv8b6"] Feb 24 05:52:42.979860 master-0 kubenswrapper[34361]: W0224 05:52:42.979744 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05343afd_e975_47cb_a3f4_58664d26d871.slice/crio-fb2bf7b7c831f1e20ad5adf412de75ab97252a198dcc478117147f864b83b15c WatchSource:0}: Error finding container fb2bf7b7c831f1e20ad5adf412de75ab97252a198dcc478117147f864b83b15c: Status 404 returned error can't find the container with id fb2bf7b7c831f1e20ad5adf412de75ab97252a198dcc478117147f864b83b15c Feb 24 05:52:43.241269 master-0 kubenswrapper[34361]: I0224 05:52:43.239340 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-64cf598f88-t2877"] Feb 24 05:52:43.502602 master-0 kubenswrapper[34361]: I0224 05:52:43.502546 34361 generic.go:334] "Generic (PLEG): 
container finished" podID="7e30393e-c247-4ba9-9db9-864d16ba6d82" containerID="bd09c98d98993340d389f251378d76ce14467fdb5b3cfc089a66c0e6178dace3" exitCode=0 Feb 24 05:52:43.503051 master-0 kubenswrapper[34361]: I0224 05:52:43.502648 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s9d6l" event={"ID":"7e30393e-c247-4ba9-9db9-864d16ba6d82","Type":"ContainerDied","Data":"bd09c98d98993340d389f251378d76ce14467fdb5b3cfc089a66c0e6178dace3"} Feb 24 05:52:43.509152 master-0 kubenswrapper[34361]: I0224 05:52:43.507688 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64cf598f88-t2877" event={"ID":"642b528f-fe86-4321-8818-131d99e034f4","Type":"ContainerStarted","Data":"0480227e364d05ce212889d820d5634dd47663bbd3f9eaed8dff6fe90c64d85a"} Feb 24 05:52:43.510938 master-0 kubenswrapper[34361]: I0224 05:52:43.510868 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-db-sync-f9mbk" event={"ID":"41c862b6-5eb6-4f54-a435-a8e7691b87c9","Type":"ContainerStarted","Data":"c47f422e1fdc982c03255913a4df34e0eb690b433e5d23550b39b7db2c74272d"} Feb 24 05:52:43.513268 master-0 kubenswrapper[34361]: I0224 05:52:43.513220 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb464bf7d-gv8b6" event={"ID":"05343afd-e975-47cb-a3f4-58664d26d871","Type":"ContainerStarted","Data":"84af720b033e0813084f651dc8d59e820c61bb2232501fe50e4f346a78960db9"} Feb 24 05:52:43.513376 master-0 kubenswrapper[34361]: I0224 05:52:43.513296 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb464bf7d-gv8b6" event={"ID":"05343afd-e975-47cb-a3f4-58664d26d871","Type":"ContainerStarted","Data":"fb2bf7b7c831f1e20ad5adf412de75ab97252a198dcc478117147f864b83b15c"} Feb 24 05:52:43.563862 master-0 kubenswrapper[34361]: I0224 05:52:43.563773 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-db-sync-f9mbk" 
podStartSLOduration=3.814572317 podStartE2EDuration="31.563746968s" podCreationTimestamp="2026-02-24 05:52:12 +0000 UTC" firstStartedPulling="2026-02-24 05:52:14.775393909 +0000 UTC m=+894.478010955" lastFinishedPulling="2026-02-24 05:52:42.52456856 +0000 UTC m=+922.227185606" observedRunningTime="2026-02-24 05:52:43.549526504 +0000 UTC m=+923.252143550" watchObservedRunningTime="2026-02-24 05:52:43.563746968 +0000 UTC m=+923.266364014" Feb 24 05:52:44.534750 master-0 kubenswrapper[34361]: I0224 05:52:44.534615 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-64cf598f88-t2877" event={"ID":"642b528f-fe86-4321-8818-131d99e034f4","Type":"ContainerStarted","Data":"564af384d7d7426d6c60d6ac63291116bbea37c6b4604bc976a78b82aceb4e6b"} Feb 24 05:52:44.535675 master-0 kubenswrapper[34361]: I0224 05:52:44.534763 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:52:44.538606 master-0 kubenswrapper[34361]: I0224 05:52:44.538498 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb464bf7d-gv8b6" event={"ID":"05343afd-e975-47cb-a3f4-58664d26d871","Type":"ContainerStarted","Data":"95ec194de761882be6bf22ca2973e3e0e5fbb4be965fad586d74dc01ee70cc37"} Feb 24 05:52:44.538754 master-0 kubenswrapper[34361]: I0224 05:52:44.538625 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:44.538754 master-0 kubenswrapper[34361]: I0224 05:52:44.538690 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:52:44.547393 master-0 kubenswrapper[34361]: I0224 05:52:44.545115 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s9d6l" event={"ID":"7e30393e-c247-4ba9-9db9-864d16ba6d82","Type":"ContainerStarted","Data":"9d60d4d6b2af8e7533e56ae9ba0ebd383f1b4443362a0493fff00bdb76302614"} Feb 24 
05:52:44.586766 master-0 kubenswrapper[34361]: I0224 05:52:44.586622 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-64cf598f88-t2877" podStartSLOduration=2.5865884340000003 podStartE2EDuration="2.586588434s" podCreationTimestamp="2026-02-24 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:44.574413525 +0000 UTC m=+924.277030601" watchObservedRunningTime="2026-02-24 05:52:44.586588434 +0000 UTC m=+924.289205500" Feb 24 05:52:44.696390 master-0 kubenswrapper[34361]: I0224 05:52:44.696250 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-fb464bf7d-gv8b6" podStartSLOduration=13.696220401 podStartE2EDuration="13.696220401s" podCreationTimestamp="2026-02-24 05:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:44.695394089 +0000 UTC m=+924.398011155" watchObservedRunningTime="2026-02-24 05:52:44.696220401 +0000 UTC m=+924.398837447" Feb 24 05:52:45.271064 master-0 kubenswrapper[34361]: I0224 05:52:45.270930 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-s9d6l" podStartSLOduration=4.268448822 podStartE2EDuration="21.27090141s" podCreationTimestamp="2026-02-24 05:52:24 +0000 UTC" firstStartedPulling="2026-02-24 05:52:25.468464065 +0000 UTC m=+905.171081111" lastFinishedPulling="2026-02-24 05:52:42.470916653 +0000 UTC m=+922.173533699" observedRunningTime="2026-02-24 05:52:45.264477357 +0000 UTC m=+924.967094413" watchObservedRunningTime="2026-02-24 05:52:45.27090141 +0000 UTC m=+924.973518456" Feb 24 05:52:49.644544 master-0 kubenswrapper[34361]: I0224 05:52:49.644451 34361 generic.go:334] "Generic (PLEG): container finished" podID="4f5d8934-00e0-46c9-ba9d-d9183edd6fb8" 
containerID="e1779a5577f87379b3eaec1b4b22da92e33df9ea40fc881bc79cca47a933b8d7" exitCode=0 Feb 24 05:52:49.645244 master-0 kubenswrapper[34361]: I0224 05:52:49.644538 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m7xgd" event={"ID":"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8","Type":"ContainerDied","Data":"e1779a5577f87379b3eaec1b4b22da92e33df9ea40fc881bc79cca47a933b8d7"} Feb 24 05:52:51.197936 master-0 kubenswrapper[34361]: I0224 05:52:51.197874 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m7xgd" Feb 24 05:52:51.380450 master-0 kubenswrapper[34361]: I0224 05:52:51.380291 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-combined-ca-bundle\") pod \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " Feb 24 05:52:51.382553 master-0 kubenswrapper[34361]: I0224 05:52:51.382481 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-config\") pod \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " Feb 24 05:52:51.383239 master-0 kubenswrapper[34361]: I0224 05:52:51.383206 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkx9z\" (UniqueName: \"kubernetes.io/projected/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-kube-api-access-qkx9z\") pod \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\" (UID: \"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8\") " Feb 24 05:52:51.389879 master-0 kubenswrapper[34361]: I0224 05:52:51.389766 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-kube-api-access-qkx9z" (OuterVolumeSpecName: "kube-api-access-qkx9z") pod 
"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8" (UID: "4f5d8934-00e0-46c9-ba9d-d9183edd6fb8"). InnerVolumeSpecName "kube-api-access-qkx9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:51.442142 master-0 kubenswrapper[34361]: I0224 05:52:51.442051 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-config" (OuterVolumeSpecName: "config") pod "4f5d8934-00e0-46c9-ba9d-d9183edd6fb8" (UID: "4f5d8934-00e0-46c9-ba9d-d9183edd6fb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:51.442142 master-0 kubenswrapper[34361]: I0224 05:52:51.442143 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f5d8934-00e0-46c9-ba9d-d9183edd6fb8" (UID: "4f5d8934-00e0-46c9-ba9d-d9183edd6fb8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:52:51.487599 master-0 kubenswrapper[34361]: I0224 05:52:51.487212 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:51.487599 master-0 kubenswrapper[34361]: I0224 05:52:51.487268 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkx9z\" (UniqueName: \"kubernetes.io/projected/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-kube-api-access-qkx9z\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:51.487599 master-0 kubenswrapper[34361]: I0224 05:52:51.487284 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:51.677280 master-0 kubenswrapper[34361]: I0224 05:52:51.676788 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m7xgd" event={"ID":"4f5d8934-00e0-46c9-ba9d-d9183edd6fb8","Type":"ContainerDied","Data":"30e75ae7a378acfeb29382c3e292ebc80b38786d52e4eea64ed9f6c48aeb1920"} Feb 24 05:52:51.677280 master-0 kubenswrapper[34361]: I0224 05:52:51.676888 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e75ae7a378acfeb29382c3e292ebc80b38786d52e4eea64ed9f6c48aeb1920" Feb 24 05:52:51.677280 master-0 kubenswrapper[34361]: I0224 05:52:51.676825 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m7xgd" Feb 24 05:52:53.241751 master-0 kubenswrapper[34361]: I0224 05:52:53.241662 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84969fcbcc-27cm6"] Feb 24 05:52:53.243358 master-0 kubenswrapper[34361]: E0224 05:52:53.242616 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5d8934-00e0-46c9-ba9d-d9183edd6fb8" containerName="neutron-db-sync" Feb 24 05:52:53.243358 master-0 kubenswrapper[34361]: I0224 05:52:53.242649 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5d8934-00e0-46c9-ba9d-d9183edd6fb8" containerName="neutron-db-sync" Feb 24 05:52:53.243358 master-0 kubenswrapper[34361]: I0224 05:52:53.243073 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5d8934-00e0-46c9-ba9d-d9183edd6fb8" containerName="neutron-db-sync" Feb 24 05:52:53.245583 master-0 kubenswrapper[34361]: I0224 05:52:53.245548 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.267936 master-0 kubenswrapper[34361]: I0224 05:52:53.262674 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84969fcbcc-27cm6"] Feb 24 05:52:53.335885 master-0 kubenswrapper[34361]: I0224 05:52:53.335811 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkt2\" (UniqueName: \"kubernetes.io/projected/4de3f381-7d5a-46fe-9c93-97e35cca31d1-kube-api-access-7vkt2\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.336182 master-0 kubenswrapper[34361]: I0224 05:52:53.335908 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-config\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.336182 master-0 kubenswrapper[34361]: I0224 05:52:53.335932 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-sb\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.336182 master-0 kubenswrapper[34361]: I0224 05:52:53.335968 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-nb\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.336182 master-0 kubenswrapper[34361]: I0224 
05:52:53.336030 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-swift-storage-0\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.336182 master-0 kubenswrapper[34361]: I0224 05:52:53.336078 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-svc\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.377158 master-0 kubenswrapper[34361]: I0224 05:52:53.374240 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d477bdc58-p8d8s"] Feb 24 05:52:53.377158 master-0 kubenswrapper[34361]: I0224 05:52:53.376918 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.386397 master-0 kubenswrapper[34361]: I0224 05:52:53.381289 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 24 05:52:53.395386 master-0 kubenswrapper[34361]: I0224 05:52:53.389915 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 24 05:52:53.395386 master-0 kubenswrapper[34361]: I0224 05:52:53.390297 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 24 05:52:53.430346 master-0 kubenswrapper[34361]: I0224 05:52:53.429439 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d477bdc58-p8d8s"] Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.442375 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-ovndb-tls-certs\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.442455 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-swift-storage-0\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.442643 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-svc\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 
05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.442803 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-config\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.442846 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-httpd-config\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.442928 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-combined-ca-bundle\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.442998 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vkt2\" (UniqueName: \"kubernetes.io/projected/4de3f381-7d5a-46fe-9c93-97e35cca31d1-kube-api-access-7vkt2\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.443061 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k2wm\" (UniqueName: \"kubernetes.io/projected/3057364f-388c-47da-adc8-4c8e074b8362-kube-api-access-5k2wm\") pod \"neutron-d477bdc58-p8d8s\" (UID: 
\"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.443145 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-config\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.443167 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-sb\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.443229 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-nb\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.444416 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-nb\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.445463 master-0 kubenswrapper[34361]: I0224 05:52:53.444600 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-swift-storage-0\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: 
\"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.446136 master-0 kubenswrapper[34361]: I0224 05:52:53.445563 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-svc\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.446136 master-0 kubenswrapper[34361]: I0224 05:52:53.445819 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-config\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.446413 master-0 kubenswrapper[34361]: I0224 05:52:53.446365 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-sb\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.490427 master-0 kubenswrapper[34361]: I0224 05:52:53.481704 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vkt2\" (UniqueName: \"kubernetes.io/projected/4de3f381-7d5a-46fe-9c93-97e35cca31d1-kube-api-access-7vkt2\") pod \"dnsmasq-dns-84969fcbcc-27cm6\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.545173 master-0 kubenswrapper[34361]: I0224 05:52:53.545079 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-ovndb-tls-certs\") pod \"neutron-d477bdc58-p8d8s\" (UID: 
\"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.545507 master-0 kubenswrapper[34361]: I0224 05:52:53.545216 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-config\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.545507 master-0 kubenswrapper[34361]: I0224 05:52:53.545242 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-httpd-config\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.545507 master-0 kubenswrapper[34361]: I0224 05:52:53.545279 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-combined-ca-bundle\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.545507 master-0 kubenswrapper[34361]: I0224 05:52:53.545334 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k2wm\" (UniqueName: \"kubernetes.io/projected/3057364f-388c-47da-adc8-4c8e074b8362-kube-api-access-5k2wm\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.564345 master-0 kubenswrapper[34361]: I0224 05:52:53.550903 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-httpd-config\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " 
pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.564345 master-0 kubenswrapper[34361]: I0224 05:52:53.559487 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-combined-ca-bundle\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.564345 master-0 kubenswrapper[34361]: I0224 05:52:53.559680 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-ovndb-tls-certs\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.564345 master-0 kubenswrapper[34361]: I0224 05:52:53.560028 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-config\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.575340 master-0 kubenswrapper[34361]: I0224 05:52:53.570597 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k2wm\" (UniqueName: \"kubernetes.io/projected/3057364f-388c-47da-adc8-4c8e074b8362-kube-api-access-5k2wm\") pod \"neutron-d477bdc58-p8d8s\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") " pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.638547 master-0 kubenswrapper[34361]: I0224 05:52:53.638435 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:53.707535 master-0 kubenswrapper[34361]: I0224 05:52:53.707380 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:52:53.719450 master-0 kubenswrapper[34361]: I0224 05:52:53.716043 34361 generic.go:334] "Generic (PLEG): container finished" podID="41c862b6-5eb6-4f54-a435-a8e7691b87c9" containerID="c47f422e1fdc982c03255913a4df34e0eb690b433e5d23550b39b7db2c74272d" exitCode=0 Feb 24 05:52:53.719450 master-0 kubenswrapper[34361]: I0224 05:52:53.716114 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-db-sync-f9mbk" event={"ID":"41c862b6-5eb6-4f54-a435-a8e7691b87c9","Type":"ContainerDied","Data":"c47f422e1fdc982c03255913a4df34e0eb690b433e5d23550b39b7db2c74272d"} Feb 24 05:52:54.247786 master-0 kubenswrapper[34361]: I0224 05:52:54.247710 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84969fcbcc-27cm6"] Feb 24 05:52:54.516877 master-0 kubenswrapper[34361]: I0224 05:52:54.516834 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d477bdc58-p8d8s"] Feb 24 05:52:54.731409 master-0 kubenswrapper[34361]: I0224 05:52:54.731346 34361 generic.go:334] "Generic (PLEG): container finished" podID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" containerID="f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9" exitCode=0 Feb 24 05:52:54.731706 master-0 kubenswrapper[34361]: I0224 05:52:54.731437 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" event={"ID":"4de3f381-7d5a-46fe-9c93-97e35cca31d1","Type":"ContainerDied","Data":"f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9"} Feb 24 05:52:54.731706 master-0 kubenswrapper[34361]: I0224 05:52:54.731470 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" event={"ID":"4de3f381-7d5a-46fe-9c93-97e35cca31d1","Type":"ContainerStarted","Data":"1af06452c78d9e90cb5fbdaa827ed418a9e6a0312eb5bc422a4d3489334e6e9a"} Feb 24 05:52:54.734515 master-0 kubenswrapper[34361]: I0224 
05:52:54.734411 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d477bdc58-p8d8s" event={"ID":"3057364f-388c-47da-adc8-4c8e074b8362","Type":"ContainerStarted","Data":"b135f60c4c74d5c03d568fb4b645d7d7e27c289dc4554b7bffed6440ae659678"} Feb 24 05:52:55.172365 master-0 kubenswrapper[34361]: I0224 05:52:55.172320 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-db-sync-f9mbk" Feb 24 05:52:55.300575 master-0 kubenswrapper[34361]: I0224 05:52:55.300499 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljrcx\" (UniqueName: \"kubernetes.io/projected/41c862b6-5eb6-4f54-a435-a8e7691b87c9-kube-api-access-ljrcx\") pod \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " Feb 24 05:52:55.300575 master-0 kubenswrapper[34361]: I0224 05:52:55.300582 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-config-data\") pod \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " Feb 24 05:52:55.301235 master-0 kubenswrapper[34361]: I0224 05:52:55.300649 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-combined-ca-bundle\") pod \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " Feb 24 05:52:55.301235 master-0 kubenswrapper[34361]: I0224 05:52:55.300756 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41c862b6-5eb6-4f54-a435-a8e7691b87c9-etc-machine-id\") pod \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") " Feb 24 05:52:55.301235 master-0 
kubenswrapper[34361]: I0224 05:52:55.300936 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-db-sync-config-data\") pod \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") "
Feb 24 05:52:55.301235 master-0 kubenswrapper[34361]: I0224 05:52:55.300985 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-scripts\") pod \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\" (UID: \"41c862b6-5eb6-4f54-a435-a8e7691b87c9\") "
Feb 24 05:52:55.307419 master-0 kubenswrapper[34361]: I0224 05:52:55.305453 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41c862b6-5eb6-4f54-a435-a8e7691b87c9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "41c862b6-5eb6-4f54-a435-a8e7691b87c9" (UID: "41c862b6-5eb6-4f54-a435-a8e7691b87c9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:52:55.327550 master-0 kubenswrapper[34361]: I0224 05:52:55.327291 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-scripts" (OuterVolumeSpecName: "scripts") pod "41c862b6-5eb6-4f54-a435-a8e7691b87c9" (UID: "41c862b6-5eb6-4f54-a435-a8e7691b87c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:55.343340 master-0 kubenswrapper[34361]: I0224 05:52:55.341422 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c862b6-5eb6-4f54-a435-a8e7691b87c9-kube-api-access-ljrcx" (OuterVolumeSpecName: "kube-api-access-ljrcx") pod "41c862b6-5eb6-4f54-a435-a8e7691b87c9" (UID: "41c862b6-5eb6-4f54-a435-a8e7691b87c9"). InnerVolumeSpecName "kube-api-access-ljrcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:52:55.351700 master-0 kubenswrapper[34361]: I0224 05:52:55.351600 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "41c862b6-5eb6-4f54-a435-a8e7691b87c9" (UID: "41c862b6-5eb6-4f54-a435-a8e7691b87c9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:55.404332 master-0 kubenswrapper[34361]: I0224 05:52:55.404094 34361 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41c862b6-5eb6-4f54-a435-a8e7691b87c9-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:55.404332 master-0 kubenswrapper[34361]: I0224 05:52:55.404151 34361 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:55.404332 master-0 kubenswrapper[34361]: I0224 05:52:55.404163 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:55.404332 master-0 kubenswrapper[34361]: I0224 05:52:55.404175 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljrcx\" (UniqueName: \"kubernetes.io/projected/41c862b6-5eb6-4f54-a435-a8e7691b87c9-kube-api-access-ljrcx\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:55.425745 master-0 kubenswrapper[34361]: I0224 05:52:55.425637 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c862b6-5eb6-4f54-a435-a8e7691b87c9" (UID: "41c862b6-5eb6-4f54-a435-a8e7691b87c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:55.450372 master-0 kubenswrapper[34361]: I0224 05:52:55.448791 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-config-data" (OuterVolumeSpecName: "config-data") pod "41c862b6-5eb6-4f54-a435-a8e7691b87c9" (UID: "41c862b6-5eb6-4f54-a435-a8e7691b87c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:52:55.510355 master-0 kubenswrapper[34361]: I0224 05:52:55.506975 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:55.510355 master-0 kubenswrapper[34361]: I0224 05:52:55.507033 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c862b6-5eb6-4f54-a435-a8e7691b87c9-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 05:52:55.567496 master-0 kubenswrapper[34361]: I0224 05:52:55.567392 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-564b95b965-jqq92"]
Feb 24 05:52:55.568165 master-0 kubenswrapper[34361]: E0224 05:52:55.568129 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c862b6-5eb6-4f54-a435-a8e7691b87c9" containerName="cinder-b7346-db-sync"
Feb 24 05:52:55.568165 master-0 kubenswrapper[34361]: I0224 05:52:55.568158 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c862b6-5eb6-4f54-a435-a8e7691b87c9" containerName="cinder-b7346-db-sync"
Feb 24 05:52:55.568573 master-0 kubenswrapper[34361]: I0224 05:52:55.568491 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c862b6-5eb6-4f54-a435-a8e7691b87c9" containerName="cinder-b7346-db-sync"
Feb 24 05:52:55.569889 master-0 kubenswrapper[34361]: I0224 05:52:55.569861 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.572995 master-0 kubenswrapper[34361]: I0224 05:52:55.572943 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 24 05:52:55.573504 master-0 kubenswrapper[34361]: I0224 05:52:55.573344 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 24 05:52:55.602172 master-0 kubenswrapper[34361]: I0224 05:52:55.600217 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-564b95b965-jqq92"]
Feb 24 05:52:55.614898 master-0 kubenswrapper[34361]: I0224 05:52:55.612588 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-public-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.614898 master-0 kubenswrapper[34361]: I0224 05:52:55.612661 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-internal-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.614898 master-0 kubenswrapper[34361]: I0224 05:52:55.612758 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-config\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.614898 master-0 kubenswrapper[34361]: I0224 05:52:55.612778 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-httpd-config\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.614898 master-0 kubenswrapper[34361]: I0224 05:52:55.612811 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvm6b\" (UniqueName: \"kubernetes.io/projected/0505d98c-cd90-4424-b40d-304625ffdb03-kube-api-access-zvm6b\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.614898 master-0 kubenswrapper[34361]: I0224 05:52:55.612850 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-combined-ca-bundle\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.614898 master-0 kubenswrapper[34361]: I0224 05:52:55.612871 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-ovndb-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.717378 master-0 kubenswrapper[34361]: I0224 05:52:55.715593 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-config\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.717378 master-0 kubenswrapper[34361]: I0224 05:52:55.715675 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-httpd-config\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.717378 master-0 kubenswrapper[34361]: I0224 05:52:55.715745 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvm6b\" (UniqueName: \"kubernetes.io/projected/0505d98c-cd90-4424-b40d-304625ffdb03-kube-api-access-zvm6b\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.717378 master-0 kubenswrapper[34361]: I0224 05:52:55.715821 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-combined-ca-bundle\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.717378 master-0 kubenswrapper[34361]: I0224 05:52:55.715847 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-ovndb-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.717378 master-0 kubenswrapper[34361]: I0224 05:52:55.715915 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-public-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.717378 master-0 kubenswrapper[34361]: I0224 05:52:55.716020 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-internal-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.725173 master-0 kubenswrapper[34361]: I0224 05:52:55.724503 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-public-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.725173 master-0 kubenswrapper[34361]: I0224 05:52:55.724519 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-config\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.725173 master-0 kubenswrapper[34361]: I0224 05:52:55.724959 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-httpd-config\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.726450 master-0 kubenswrapper[34361]: I0224 05:52:55.726386 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-ovndb-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.726579 master-0 kubenswrapper[34361]: I0224 05:52:55.726514 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-combined-ca-bundle\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.741864 master-0 kubenswrapper[34361]: I0224 05:52:55.741564 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0505d98c-cd90-4424-b40d-304625ffdb03-internal-tls-certs\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.742144 master-0 kubenswrapper[34361]: I0224 05:52:55.742049 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvm6b\" (UniqueName: \"kubernetes.io/projected/0505d98c-cd90-4424-b40d-304625ffdb03-kube-api-access-zvm6b\") pod \"neutron-564b95b965-jqq92\" (UID: \"0505d98c-cd90-4424-b40d-304625ffdb03\") " pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:55.757906 master-0 kubenswrapper[34361]: I0224 05:52:55.752185 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" event={"ID":"4de3f381-7d5a-46fe-9c93-97e35cca31d1","Type":"ContainerStarted","Data":"b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff"}
Feb 24 05:52:55.757906 master-0 kubenswrapper[34361]: I0224 05:52:55.753959 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6"
Feb 24 05:52:55.758648 master-0 kubenswrapper[34361]: I0224 05:52:55.758561 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d477bdc58-p8d8s" event={"ID":"3057364f-388c-47da-adc8-4c8e074b8362","Type":"ContainerStarted","Data":"4f7f4865ef45e0d6f6cd182f7e4ff15bcccff460d685cf4e910cc0f51615f94e"}
Feb 24 05:52:55.758648 master-0 kubenswrapper[34361]: I0224 05:52:55.758597 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d477bdc58-p8d8s" event={"ID":"3057364f-388c-47da-adc8-4c8e074b8362","Type":"ContainerStarted","Data":"b4136cfb871ee6b478ec62af981a996389b5b5b3043351647079c6db301b06b0"}
Feb 24 05:52:55.759200 master-0 kubenswrapper[34361]: I0224 05:52:55.759170 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d477bdc58-p8d8s"
Feb 24 05:52:55.762448 master-0 kubenswrapper[34361]: I0224 05:52:55.762416 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-db-sync-f9mbk" event={"ID":"41c862b6-5eb6-4f54-a435-a8e7691b87c9","Type":"ContainerDied","Data":"837a6c85f8d72a5d5b8cfea4438b650db2c3015c8e36d7d64e077b1aa9ee2700"}
Feb 24 05:52:55.762448 master-0 kubenswrapper[34361]: I0224 05:52:55.762444 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="837a6c85f8d72a5d5b8cfea4438b650db2c3015c8e36d7d64e077b1aa9ee2700"
Feb 24 05:52:55.762589 master-0 kubenswrapper[34361]: I0224 05:52:55.762494 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-db-sync-f9mbk"
Feb 24 05:52:55.804648 master-0 kubenswrapper[34361]: I0224 05:52:55.804509 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" podStartSLOduration=2.804480447 podStartE2EDuration="2.804480447s" podCreationTimestamp="2026-02-24 05:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:55.790161851 +0000 UTC m=+935.492778897" watchObservedRunningTime="2026-02-24 05:52:55.804480447 +0000 UTC m=+935.507097493"
Feb 24 05:52:55.839783 master-0 kubenswrapper[34361]: I0224 05:52:55.839664 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d477bdc58-p8d8s" podStartSLOduration=2.839634206 podStartE2EDuration="2.839634206s" podCreationTimestamp="2026-02-24 05:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:55.833222082 +0000 UTC m=+935.535839138" watchObservedRunningTime="2026-02-24 05:52:55.839634206 +0000 UTC m=+935.542251252"
Feb 24 05:52:55.912762 master-0 kubenswrapper[34361]: I0224 05:52:55.912693 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95b965-jqq92"
Feb 24 05:52:56.272529 master-0 kubenswrapper[34361]: I0224 05:52:56.272395 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84969fcbcc-27cm6"]
Feb 24 05:52:56.327681 master-0 kubenswrapper[34361]: I0224 05:52:56.319487 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-scheduler-0"]
Feb 24 05:52:56.327681 master-0 kubenswrapper[34361]: I0224 05:52:56.327084 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.333510 master-0 kubenswrapper[34361]: I0224 05:52:56.330513 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-scheduler-config-data"
Feb 24 05:52:56.333510 master-0 kubenswrapper[34361]: I0224 05:52:56.331009 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-scripts"
Feb 24 05:52:56.345715 master-0 kubenswrapper[34361]: I0224 05:52:56.345662 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-config-data"
Feb 24 05:52:56.364211 master-0 kubenswrapper[34361]: I0224 05:52:56.364156 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"]
Feb 24 05:52:56.435668 master-0 kubenswrapper[34361]: I0224 05:52:56.435576 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data-custom\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.435668 master-0 kubenswrapper[34361]: I0224 05:52:56.435649 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-scripts\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.436183 master-0 kubenswrapper[34361]: I0224 05:52:56.435725 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-combined-ca-bundle\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.436183 master-0 kubenswrapper[34361]: I0224 05:52:56.435751 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dgmw\" (UniqueName: \"kubernetes.io/projected/2f0b28b5-741c-4761-b250-30d89ea99407-kube-api-access-9dgmw\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.436183 master-0 kubenswrapper[34361]: I0224 05:52:56.435796 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f0b28b5-741c-4761-b250-30d89ea99407-etc-machine-id\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.436183 master-0 kubenswrapper[34361]: I0224 05:52:56.435815 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.445214 master-0 kubenswrapper[34361]: I0224 05:52:56.444949 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.451239 master-0 kubenswrapper[34361]: I0224 05:52:56.451162 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-scheduler-0"]
Feb 24 05:52:56.455395 master-0 kubenswrapper[34361]: I0224 05:52:56.453737 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-volume-lvm-iscsi-config-data"
Feb 24 05:52:56.500258 master-0 kubenswrapper[34361]: I0224 05:52:56.498377 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"]
Feb 24 05:52:56.514701 master-0 kubenswrapper[34361]: I0224 05:52:56.514442 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66c9d5d889-nmpw7"]
Feb 24 05:52:56.522998 master-0 kubenswrapper[34361]: I0224 05:52:56.522943 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7"
Feb 24 05:52:56.541158 master-0 kubenswrapper[34361]: I0224 05:52:56.541078 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-combined-ca-bundle\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.542837 master-0 kubenswrapper[34361]: I0224 05:52:56.542803 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dgmw\" (UniqueName: \"kubernetes.io/projected/2f0b28b5-741c-4761-b250-30d89ea99407-kube-api-access-9dgmw\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.542965 master-0 kubenswrapper[34361]: I0224 05:52:56.542867 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f0b28b5-741c-4761-b250-30d89ea99407-etc-machine-id\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.542965 master-0 kubenswrapper[34361]: I0224 05:52:56.542890 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.543058 master-0 kubenswrapper[34361]: I0224 05:52:56.543006 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data-custom\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.543058 master-0 kubenswrapper[34361]: I0224 05:52:56.543030 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-scripts\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.544524 master-0 kubenswrapper[34361]: I0224 05:52:56.544491 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f0b28b5-741c-4761-b250-30d89ea99407-etc-machine-id\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.545180 master-0 kubenswrapper[34361]: I0224 05:52:56.545117 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-combined-ca-bundle\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.548866 master-0 kubenswrapper[34361]: I0224 05:52:56.548830 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-scripts\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.549081 master-0 kubenswrapper[34361]: I0224 05:52:56.549022 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data-custom\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.557506 master-0 kubenswrapper[34361]: I0224 05:52:56.554526 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-backup-0"]
Feb 24 05:52:56.558168 master-0 kubenswrapper[34361]: I0224 05:52:56.558108 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.560634 master-0 kubenswrapper[34361]: I0224 05:52:56.560582 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-backup-config-data"
Feb 24 05:52:56.564648 master-0 kubenswrapper[34361]: I0224 05:52:56.563934 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.565056 master-0 kubenswrapper[34361]: I0224 05:52:56.564988 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-backup-0"]
Feb 24 05:52:56.569222 master-0 kubenswrapper[34361]: I0224 05:52:56.569180 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dgmw\" (UniqueName: \"kubernetes.io/projected/2f0b28b5-741c-4761-b250-30d89ea99407-kube-api-access-9dgmw\") pod \"cinder-b7346-scheduler-0\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") " pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:52:56.579483 master-0 kubenswrapper[34361]: I0224 05:52:56.579293 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66c9d5d889-nmpw7"]
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655716 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-brick\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655784 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-dev\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655817 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655841 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655865 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-lib-modules\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655889 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-svc\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655909 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-swift-storage-0\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655928 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhlb\" (UniqueName: \"kubernetes.io/projected/fad53e67-bd04-4577-af57-e5b896b6e56f-kube-api-access-zzhlb\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655960 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-iscsi\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.655980 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-brick\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656019 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-sys\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656041 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-combined-ca-bundle\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656064 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-run\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656087 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-sys\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656116 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-run\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656147 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-nvme\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656166 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data-custom\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656189 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-scripts\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656211 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7msnw\" (UniqueName: \"kubernetes.io/projected/3c458e23-405b-449a-8e0b-aa6e42a286c9-kube-api-access-7msnw\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656237 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-lib-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656258 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-nvme\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656291 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-sb\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656328 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-dev\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656383 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656405 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-machine-id\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656426 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName:
\"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-lib-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656453 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-config\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656480 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-machine-id\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656510 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-nb\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656547 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-lib-modules\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656573 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-iscsi\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656592 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-scripts\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656612 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data-custom\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656640 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-combined-ca-bundle\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.657730 master-0 kubenswrapper[34361]: I0224 05:52:56.656661 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lndt4\" (UniqueName: \"kubernetes.io/projected/c1490d04-fc1d-488b-a427-285554ec1692-kube-api-access-lndt4\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.657730 
master-0 kubenswrapper[34361]: I0224 05:52:56.656684 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.678452 master-0 kubenswrapper[34361]: I0224 05:52:56.678375 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-564b95b965-jqq92"] Feb 24 05:52:56.687409 master-0 kubenswrapper[34361]: I0224 05:52:56.687347 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-api-0"] Feb 24 05:52:56.722341 master-0 kubenswrapper[34361]: I0224 05:52:56.715793 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.722341 master-0 kubenswrapper[34361]: I0224 05:52:56.719574 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-api-config-data" Feb 24 05:52:56.757662 master-0 kubenswrapper[34361]: I0224 05:52:56.757082 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-api-0"] Feb 24 05:52:56.769768 master-0 kubenswrapper[34361]: I0224 05:52:56.769708 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:52:56.774493 master-0 kubenswrapper[34361]: I0224 05:52:56.774430 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-svc\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.774707 master-0 kubenswrapper[34361]: I0224 05:52:56.774660 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-swift-storage-0\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.774762 master-0 kubenswrapper[34361]: I0224 05:52:56.774718 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhlb\" (UniqueName: \"kubernetes.io/projected/fad53e67-bd04-4577-af57-e5b896b6e56f-kube-api-access-zzhlb\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.774813 master-0 kubenswrapper[34361]: I0224 05:52:56.774774 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-iscsi\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.774813 master-0 kubenswrapper[34361]: I0224 05:52:56.774800 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-scripts\") pod \"cinder-b7346-api-0\" (UID: 
\"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.774916 master-0 kubenswrapper[34361]: I0224 05:52:56.774833 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-brick\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.774916 master-0 kubenswrapper[34361]: I0224 05:52:56.774883 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.774916 master-0 kubenswrapper[34361]: I0224 05:52:56.774910 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-sys\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.775049 master-0 kubenswrapper[34361]: I0224 05:52:56.774960 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-iscsi\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.775414 master-0 kubenswrapper[34361]: I0224 05:52:56.775321 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-combined-ca-bundle\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " 
pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.775544 master-0 kubenswrapper[34361]: I0224 05:52:56.775516 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-run\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.775610 master-0 kubenswrapper[34361]: I0224 05:52:56.775596 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-combined-ca-bundle\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.775712 master-0 kubenswrapper[34361]: I0224 05:52:56.775673 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-sys\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.775771 master-0 kubenswrapper[34361]: I0224 05:52:56.775714 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-svc\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.775771 master-0 kubenswrapper[34361]: I0224 05:52:56.774970 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-sys\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.775877 master-0 kubenswrapper[34361]: I0224 
05:52:56.775783 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-run\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.775959 master-0 kubenswrapper[34361]: I0224 05:52:56.775921 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-nvme\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.776039 master-0 kubenswrapper[34361]: I0224 05:52:56.775974 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data-custom\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.776039 master-0 kubenswrapper[34361]: I0224 05:52:56.776023 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-scripts\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.776162 master-0 kubenswrapper[34361]: I0224 05:52:56.776055 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7msnw\" (UniqueName: \"kubernetes.io/projected/3c458e23-405b-449a-8e0b-aa6e42a286c9-kube-api-access-7msnw\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.776162 master-0 kubenswrapper[34361]: I0224 05:52:56.776117 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-lib-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.776162 master-0 kubenswrapper[34361]: I0224 05:52:56.776158 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-nvme\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.776329 master-0 kubenswrapper[34361]: I0224 05:52:56.776239 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-sb\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.776329 master-0 kubenswrapper[34361]: I0224 05:52:56.776288 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-dev\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.777352 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.777399 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-machine-id\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.777424 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-lib-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.777457 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-brick\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.776585 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-run\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.777467 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d496d7e-f8ee-477d-ba68-1084904b9b33-logs\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.776457 34361 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-swift-storage-0\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.776535 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-nvme\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.776559 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-run\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.776572 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-sys\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.778010 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-dev\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.778555 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-lib-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.778602 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-nvme\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.779229 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-sb\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.779273 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-machine-id\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.779381 master-0 kubenswrapper[34361]: I0224 05:52:56.779382 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.780212 master-0 kubenswrapper[34361]: I0224 05:52:56.779429 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-lib-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.780212 master-0 kubenswrapper[34361]: I0224 05:52:56.779574 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-config\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.780212 master-0 kubenswrapper[34361]: I0224 05:52:56.779616 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-machine-id\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.780212 master-0 kubenswrapper[34361]: I0224 05:52:56.780176 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-config\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.780212 master-0 kubenswrapper[34361]: I0224 05:52:56.780214 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-machine-id\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.780485 master-0 kubenswrapper[34361]: I0224 05:52:56.780284 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-nb\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.780485 master-0 kubenswrapper[34361]: I0224 05:52:56.780364 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d496d7e-f8ee-477d-ba68-1084904b9b33-etc-machine-id\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.780485 master-0 kubenswrapper[34361]: I0224 05:52:56.780409 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-lib-modules\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.780485 master-0 kubenswrapper[34361]: I0224 05:52:56.780462 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc5fx\" (UniqueName: \"kubernetes.io/projected/3d496d7e-f8ee-477d-ba68-1084904b9b33-kube-api-access-dc5fx\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.780666 master-0 kubenswrapper[34361]: I0224 05:52:56.780522 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-scripts\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.780666 master-0 kubenswrapper[34361]: I0224 05:52:56.780552 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-iscsi\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.780666 master-0 kubenswrapper[34361]: I0224 05:52:56.780597 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data-custom\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.780666 master-0 kubenswrapper[34361]: I0224 05:52:56.780632 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-combined-ca-bundle\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.780666 master-0 kubenswrapper[34361]: I0224 05:52:56.780664 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lndt4\" (UniqueName: \"kubernetes.io/projected/c1490d04-fc1d-488b-a427-285554ec1692-kube-api-access-lndt4\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.780871 master-0 kubenswrapper[34361]: I0224 05:52:56.780694 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.780871 master-0 kubenswrapper[34361]: I0224 05:52:56.780770 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-brick\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.780871 master-0 kubenswrapper[34361]: I0224 05:52:56.780817 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-dev\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.780871 master-0 kubenswrapper[34361]: I0224 05:52:56.780846 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data-custom\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.783530 master-0 kubenswrapper[34361]: I0224 05:52:56.783481 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-scripts\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.783626 master-0 kubenswrapper[34361]: I0224 05:52:56.783549 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-lib-modules\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.784210 master-0 kubenswrapper[34361]: I0224 05:52:56.784185 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-nb\") pod 
\"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.785220 master-0 kubenswrapper[34361]: I0224 05:52:56.784730 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.786944 master-0 kubenswrapper[34361]: I0224 05:52:56.785539 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.786944 master-0 kubenswrapper[34361]: I0224 05:52:56.785648 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-lib-modules\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.786944 master-0 kubenswrapper[34361]: I0224 05:52:56.786539 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.790245 master-0 kubenswrapper[34361]: I0224 05:52:56.790198 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-brick\") pod 
\"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.790245 master-0 kubenswrapper[34361]: I0224 05:52:56.790205 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-lib-modules\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.790536 master-0 kubenswrapper[34361]: I0224 05:52:56.790247 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-iscsi\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.790536 master-0 kubenswrapper[34361]: I0224 05:52:56.790361 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-dev\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.790536 master-0 kubenswrapper[34361]: I0224 05:52:56.790348 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-combined-ca-bundle\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.794080 master-0 kubenswrapper[34361]: I0224 05:52:56.794010 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-scripts\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " 
pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.794385 master-0 kubenswrapper[34361]: I0224 05:52:56.794346 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-combined-ca-bundle\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.795391 master-0 kubenswrapper[34361]: I0224 05:52:56.795338 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.798231 master-0 kubenswrapper[34361]: I0224 05:52:56.798195 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7msnw\" (UniqueName: \"kubernetes.io/projected/3c458e23-405b-449a-8e0b-aa6e42a286c9-kube-api-access-7msnw\") pod \"dnsmasq-dns-66c9d5d889-nmpw7\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.801289 master-0 kubenswrapper[34361]: I0224 05:52:56.801211 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhlb\" (UniqueName: \"kubernetes.io/projected/fad53e67-bd04-4577-af57-e5b896b6e56f-kube-api-access-zzhlb\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.803495 master-0 kubenswrapper[34361]: I0224 05:52:56.803409 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data-custom\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " 
pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.803495 master-0 kubenswrapper[34361]: I0224 05:52:56.803426 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.804869 master-0 kubenswrapper[34361]: I0224 05:52:56.803746 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data-custom\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:56.821461 master-0 kubenswrapper[34361]: I0224 05:52:56.818799 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lndt4\" (UniqueName: \"kubernetes.io/projected/c1490d04-fc1d-488b-a427-285554ec1692-kube-api-access-lndt4\") pod \"cinder-b7346-backup-0\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.828561 master-0 kubenswrapper[34361]: I0224 05:52:56.828497 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95b965-jqq92" event={"ID":"0505d98c-cd90-4424-b40d-304625ffdb03","Type":"ContainerStarted","Data":"500d6887acf4a13193974e701ccf71d730b1de4f6a20b7ce30bfb926907b246b"} Feb 24 05:52:56.841128 master-0 kubenswrapper[34361]: I0224 05:52:56.840944 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:52:56.861484 master-0 kubenswrapper[34361]: I0224 05:52:56.861415 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b7346-backup-0" Feb 24 05:52:56.890895 master-0 kubenswrapper[34361]: I0224 05:52:56.888353 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d496d7e-f8ee-477d-ba68-1084904b9b33-logs\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.890895 master-0 kubenswrapper[34361]: I0224 05:52:56.888473 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d496d7e-f8ee-477d-ba68-1084904b9b33-etc-machine-id\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.890895 master-0 kubenswrapper[34361]: I0224 05:52:56.888508 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc5fx\" (UniqueName: \"kubernetes.io/projected/3d496d7e-f8ee-477d-ba68-1084904b9b33-kube-api-access-dc5fx\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.890895 master-0 kubenswrapper[34361]: I0224 05:52:56.888612 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data-custom\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.890895 master-0 kubenswrapper[34361]: I0224 05:52:56.888804 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-scripts\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.890895 master-0 
kubenswrapper[34361]: I0224 05:52:56.888964 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.890895 master-0 kubenswrapper[34361]: I0224 05:52:56.889100 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-combined-ca-bundle\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.892468 master-0 kubenswrapper[34361]: I0224 05:52:56.891791 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d496d7e-f8ee-477d-ba68-1084904b9b33-logs\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.893217 master-0 kubenswrapper[34361]: I0224 05:52:56.892676 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d496d7e-f8ee-477d-ba68-1084904b9b33-etc-machine-id\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.895502 master-0 kubenswrapper[34361]: I0224 05:52:56.894969 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-scripts\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.896651 master-0 kubenswrapper[34361]: I0224 05:52:56.896594 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-combined-ca-bundle\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.897742 master-0 kubenswrapper[34361]: I0224 05:52:56.897672 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data-custom\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.901678 master-0 kubenswrapper[34361]: I0224 05:52:56.901228 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:56.915640 master-0 kubenswrapper[34361]: I0224 05:52:56.915182 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc5fx\" (UniqueName: \"kubernetes.io/projected/3d496d7e-f8ee-477d-ba68-1084904b9b33-kube-api-access-dc5fx\") pod \"cinder-b7346-api-0\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:52:57.086222 master-0 kubenswrapper[34361]: I0224 05:52:57.086072 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:52:57.174531 master-0 kubenswrapper[34361]: I0224 05:52:57.174475 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b7346-api-0" Feb 24 05:52:57.395441 master-0 kubenswrapper[34361]: I0224 05:52:57.391664 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-scheduler-0"] Feb 24 05:52:57.679562 master-0 kubenswrapper[34361]: I0224 05:52:57.679488 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66c9d5d889-nmpw7"] Feb 24 05:52:57.693915 master-0 kubenswrapper[34361]: W0224 05:52:57.693847 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c458e23_405b_449a_8e0b_aa6e42a286c9.slice/crio-d7a8e7e943ec22b815d47f56d603c628058fbafba9f3477f332edcb39f803433 WatchSource:0}: Error finding container d7a8e7e943ec22b815d47f56d603c628058fbafba9f3477f332edcb39f803433: Status 404 returned error can't find the container with id d7a8e7e943ec22b815d47f56d603c628058fbafba9f3477f332edcb39f803433 Feb 24 05:52:57.887745 master-0 kubenswrapper[34361]: I0224 05:52:57.887560 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" event={"ID":"3c458e23-405b-449a-8e0b-aa6e42a286c9","Type":"ContainerStarted","Data":"d7a8e7e943ec22b815d47f56d603c628058fbafba9f3477f332edcb39f803433"} Feb 24 05:52:57.889262 master-0 kubenswrapper[34361]: I0224 05:52:57.889217 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95b965-jqq92" event={"ID":"0505d98c-cd90-4424-b40d-304625ffdb03","Type":"ContainerStarted","Data":"f968a9aa5bb4f65e617d24dafe04c189bf6fa379a490ff2cb9ddd05562acf048"} Feb 24 05:52:57.889262 master-0 kubenswrapper[34361]: I0224 05:52:57.889252 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95b965-jqq92" event={"ID":"0505d98c-cd90-4424-b40d-304625ffdb03","Type":"ContainerStarted","Data":"05f5b0f7d9540fdeee8edff8909e60addbfc912341ae82aab458d65e4c2a6c34"} Feb 24 05:52:57.890920 master-0 kubenswrapper[34361]: 
I0224 05:52:57.890881 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-564b95b965-jqq92" Feb 24 05:52:57.897072 master-0 kubenswrapper[34361]: I0224 05:52:57.897005 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"2f0b28b5-741c-4761-b250-30d89ea99407","Type":"ContainerStarted","Data":"ad356392f9d21e069c43944c4d6d2a68d8b26d9fa531697713595e104dd045ee"} Feb 24 05:52:57.897507 master-0 kubenswrapper[34361]: I0224 05:52:57.897469 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" podUID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" containerName="dnsmasq-dns" containerID="cri-o://b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff" gracePeriod=10 Feb 24 05:52:57.922566 master-0 kubenswrapper[34361]: I0224 05:52:57.919101 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-564b95b965-jqq92" podStartSLOduration=2.9190722190000002 podStartE2EDuration="2.919072219s" podCreationTimestamp="2026-02-24 05:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:52:57.916833559 +0000 UTC m=+937.619450625" watchObservedRunningTime="2026-02-24 05:52:57.919072219 +0000 UTC m=+937.621689265" Feb 24 05:52:58.010727 master-0 kubenswrapper[34361]: I0224 05:52:58.010657 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-backup-0"] Feb 24 05:52:58.026947 master-0 kubenswrapper[34361]: I0224 05:52:58.026801 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"] Feb 24 05:52:58.044338 master-0 kubenswrapper[34361]: W0224 05:52:58.044270 34361 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1490d04_fc1d_488b_a427_285554ec1692.slice/crio-712f67cdabb3bae9608d2725a92cb1b04710e175f31705fa460471515a7feebd WatchSource:0}: Error finding container 712f67cdabb3bae9608d2725a92cb1b04710e175f31705fa460471515a7feebd: Status 404 returned error can't find the container with id 712f67cdabb3bae9608d2725a92cb1b04710e175f31705fa460471515a7feebd Feb 24 05:52:58.229022 master-0 kubenswrapper[34361]: I0224 05:52:58.228946 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-api-0"] Feb 24 05:52:58.524671 master-0 kubenswrapper[34361]: I0224 05:52:58.524339 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:58.632814 master-0 kubenswrapper[34361]: I0224 05:52:58.632751 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-svc\") pod \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " Feb 24 05:52:58.632982 master-0 kubenswrapper[34361]: I0224 05:52:58.632837 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-swift-storage-0\") pod \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " Feb 24 05:52:58.632982 master-0 kubenswrapper[34361]: I0224 05:52:58.632865 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vkt2\" (UniqueName: \"kubernetes.io/projected/4de3f381-7d5a-46fe-9c93-97e35cca31d1-kube-api-access-7vkt2\") pod \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " Feb 24 05:52:58.632982 master-0 kubenswrapper[34361]: I0224 05:52:58.632978 34361 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-nb\") pod \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " Feb 24 05:52:58.633109 master-0 kubenswrapper[34361]: I0224 05:52:58.633035 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-sb\") pod \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " Feb 24 05:52:58.633109 master-0 kubenswrapper[34361]: I0224 05:52:58.633073 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-config\") pod \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\" (UID: \"4de3f381-7d5a-46fe-9c93-97e35cca31d1\") " Feb 24 05:52:58.639701 master-0 kubenswrapper[34361]: I0224 05:52:58.639627 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de3f381-7d5a-46fe-9c93-97e35cca31d1-kube-api-access-7vkt2" (OuterVolumeSpecName: "kube-api-access-7vkt2") pod "4de3f381-7d5a-46fe-9c93-97e35cca31d1" (UID: "4de3f381-7d5a-46fe-9c93-97e35cca31d1"). InnerVolumeSpecName "kube-api-access-7vkt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:52:58.709500 master-0 kubenswrapper[34361]: I0224 05:52:58.709210 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4de3f381-7d5a-46fe-9c93-97e35cca31d1" (UID: "4de3f381-7d5a-46fe-9c93-97e35cca31d1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:58.735743 master-0 kubenswrapper[34361]: I0224 05:52:58.735016 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:58.735743 master-0 kubenswrapper[34361]: I0224 05:52:58.735062 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vkt2\" (UniqueName: \"kubernetes.io/projected/4de3f381-7d5a-46fe-9c93-97e35cca31d1-kube-api-access-7vkt2\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:58.767890 master-0 kubenswrapper[34361]: I0224 05:52:58.767812 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-config" (OuterVolumeSpecName: "config") pod "4de3f381-7d5a-46fe-9c93-97e35cca31d1" (UID: "4de3f381-7d5a-46fe-9c93-97e35cca31d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:58.818748 master-0 kubenswrapper[34361]: I0224 05:52:58.810997 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4de3f381-7d5a-46fe-9c93-97e35cca31d1" (UID: "4de3f381-7d5a-46fe-9c93-97e35cca31d1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:58.833763 master-0 kubenswrapper[34361]: I0224 05:52:58.832967 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4de3f381-7d5a-46fe-9c93-97e35cca31d1" (UID: "4de3f381-7d5a-46fe-9c93-97e35cca31d1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:58.837881 master-0 kubenswrapper[34361]: I0224 05:52:58.837816 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:58.837881 master-0 kubenswrapper[34361]: I0224 05:52:58.837870 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:58.838049 master-0 kubenswrapper[34361]: I0224 05:52:58.837890 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:58.870227 master-0 kubenswrapper[34361]: I0224 05:52:58.870127 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4de3f381-7d5a-46fe-9c93-97e35cca31d1" (UID: "4de3f381-7d5a-46fe-9c93-97e35cca31d1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:52:58.923374 master-0 kubenswrapper[34361]: I0224 05:52:58.919379 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" event={"ID":"fad53e67-bd04-4577-af57-e5b896b6e56f","Type":"ContainerStarted","Data":"8ea5b994ca9263487cbb12d51da025f9077de5fd96dee4a3591fdd5b7fcf61e7"} Feb 24 05:52:58.923374 master-0 kubenswrapper[34361]: I0224 05:52:58.921669 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"3d496d7e-f8ee-477d-ba68-1084904b9b33","Type":"ContainerStarted","Data":"562ac7cecb36abae9da3b1589971463014027e7d811f508b622a57eb112161a7"} Feb 24 05:52:58.932433 master-0 kubenswrapper[34361]: I0224 05:52:58.931920 34361 generic.go:334] "Generic (PLEG): container finished" podID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" containerID="b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff" exitCode=0 Feb 24 05:52:58.932433 master-0 kubenswrapper[34361]: I0224 05:52:58.931987 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" event={"ID":"4de3f381-7d5a-46fe-9c93-97e35cca31d1","Type":"ContainerDied","Data":"b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff"} Feb 24 05:52:58.932433 master-0 kubenswrapper[34361]: I0224 05:52:58.932011 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" event={"ID":"4de3f381-7d5a-46fe-9c93-97e35cca31d1","Type":"ContainerDied","Data":"1af06452c78d9e90cb5fbdaa827ed418a9e6a0312eb5bc422a4d3489334e6e9a"} Feb 24 05:52:58.932433 master-0 kubenswrapper[34361]: I0224 05:52:58.932030 34361 scope.go:117] "RemoveContainer" containerID="b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff" Feb 24 05:52:58.932433 master-0 kubenswrapper[34361]: I0224 05:52:58.932233 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84969fcbcc-27cm6" Feb 24 05:52:58.945108 master-0 kubenswrapper[34361]: I0224 05:52:58.944962 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4de3f381-7d5a-46fe-9c93-97e35cca31d1-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 05:52:58.948287 master-0 kubenswrapper[34361]: I0224 05:52:58.948235 34361 generic.go:334] "Generic (PLEG): container finished" podID="3c458e23-405b-449a-8e0b-aa6e42a286c9" containerID="0360e68d2b78f485981b0d4adbdeaef4ed36c422da62966ca8112b12a599df35" exitCode=0 Feb 24 05:52:58.948442 master-0 kubenswrapper[34361]: I0224 05:52:58.948363 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" event={"ID":"3c458e23-405b-449a-8e0b-aa6e42a286c9","Type":"ContainerDied","Data":"0360e68d2b78f485981b0d4adbdeaef4ed36c422da62966ca8112b12a599df35"} Feb 24 05:52:58.955183 master-0 kubenswrapper[34361]: I0224 05:52:58.954996 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" event={"ID":"c1490d04-fc1d-488b-a427-285554ec1692","Type":"ContainerStarted","Data":"712f67cdabb3bae9608d2725a92cb1b04710e175f31705fa460471515a7feebd"} Feb 24 05:52:59.003544 master-0 kubenswrapper[34361]: I0224 05:52:59.003478 34361 scope.go:117] "RemoveContainer" containerID="f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9" Feb 24 05:52:59.020006 master-0 kubenswrapper[34361]: I0224 05:52:59.019945 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84969fcbcc-27cm6"] Feb 24 05:52:59.044229 master-0 kubenswrapper[34361]: I0224 05:52:59.044107 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84969fcbcc-27cm6"] Feb 24 05:52:59.077852 master-0 kubenswrapper[34361]: I0224 05:52:59.076621 34361 scope.go:117] "RemoveContainer" 
containerID="b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff" Feb 24 05:52:59.079789 master-0 kubenswrapper[34361]: E0224 05:52:59.079749 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff\": container with ID starting with b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff not found: ID does not exist" containerID="b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff" Feb 24 05:52:59.079864 master-0 kubenswrapper[34361]: I0224 05:52:59.079807 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff"} err="failed to get container status \"b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff\": rpc error: code = NotFound desc = could not find container \"b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff\": container with ID starting with b43a0924981916f1d53eac73604134138fe50b0a5df3484c818ffa8a1dc263ff not found: ID does not exist" Feb 24 05:52:59.079864 master-0 kubenswrapper[34361]: I0224 05:52:59.079837 34361 scope.go:117] "RemoveContainer" containerID="f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9" Feb 24 05:52:59.081216 master-0 kubenswrapper[34361]: E0224 05:52:59.081188 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9\": container with ID starting with f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9 not found: ID does not exist" containerID="f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9" Feb 24 05:52:59.081301 master-0 kubenswrapper[34361]: I0224 05:52:59.081215 34361 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9"} err="failed to get container status \"f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9\": rpc error: code = NotFound desc = could not find container \"f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9\": container with ID starting with f4c31541e6649676ea54e212e590a7e5b14e1b2f1f8c65c21a833de3f60449b9 not found: ID does not exist" Feb 24 05:53:00.057677 master-0 kubenswrapper[34361]: I0224 05:53:00.057494 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" event={"ID":"c1490d04-fc1d-488b-a427-285554ec1692","Type":"ContainerStarted","Data":"f7d03b7d30fa8baa7cb21136279a2fed9c3dda6e15069cd7df8b5a2cd646f37e"} Feb 24 05:53:00.070637 master-0 kubenswrapper[34361]: I0224 05:53:00.070498 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"2f0b28b5-741c-4761-b250-30d89ea99407","Type":"ContainerStarted","Data":"f3f9c022e793ae50651ba44d048e2d2e445c8cdf9660ca198173ab50b867ecb7"} Feb 24 05:53:00.090069 master-0 kubenswrapper[34361]: I0224 05:53:00.089987 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"3d496d7e-f8ee-477d-ba68-1084904b9b33","Type":"ContainerStarted","Data":"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427"} Feb 24 05:53:00.133775 master-0 kubenswrapper[34361]: I0224 05:53:00.133622 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-api-0"] Feb 24 05:53:00.214517 master-0 kubenswrapper[34361]: I0224 05:53:00.214402 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" event={"ID":"3c458e23-405b-449a-8e0b-aa6e42a286c9","Type":"ContainerStarted","Data":"719381efa530f9326b166c8c44acc394693bf446e30448e49cc2d04aeac11fa2"} Feb 24 05:53:00.215206 master-0 kubenswrapper[34361]: I0224 
05:53:00.215144 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:53:00.315886 master-0 kubenswrapper[34361]: I0224 05:53:00.315206 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" podStartSLOduration=4.315174443 podStartE2EDuration="4.315174443s" podCreationTimestamp="2026-02-24 05:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:00.273943322 +0000 UTC m=+939.976560378" watchObservedRunningTime="2026-02-24 05:53:00.315174443 +0000 UTC m=+940.017791509" Feb 24 05:53:00.621143 master-0 kubenswrapper[34361]: I0224 05:53:00.621055 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" path="/var/lib/kubelet/pods/4de3f381-7d5a-46fe-9c93-97e35cca31d1/volumes" Feb 24 05:53:01.240873 master-0 kubenswrapper[34361]: I0224 05:53:01.240483 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" event={"ID":"fad53e67-bd04-4577-af57-e5b896b6e56f","Type":"ContainerStarted","Data":"fc623512700ec8e8994d8e079171552546ec3764436c6fded4cca2506b97aa97"} Feb 24 05:53:01.240873 master-0 kubenswrapper[34361]: I0224 05:53:01.240567 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" event={"ID":"fad53e67-bd04-4577-af57-e5b896b6e56f","Type":"ContainerStarted","Data":"1c34a792ad75c0733c51a93cf45561e771b6c1fed09ddf0c6d96fe0d13d23f16"} Feb 24 05:53:01.246751 master-0 kubenswrapper[34361]: I0224 05:53:01.246656 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"2f0b28b5-741c-4761-b250-30d89ea99407","Type":"ContainerStarted","Data":"6644f2541ea8eb1d0e102c41726b036f76b1bcafd2712081e27ca63f8e5ac73f"} Feb 24 05:53:01.252017 master-0 
kubenswrapper[34361]: I0224 05:53:01.251938 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"3d496d7e-f8ee-477d-ba68-1084904b9b33","Type":"ContainerStarted","Data":"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9"} Feb 24 05:53:01.252303 master-0 kubenswrapper[34361]: I0224 05:53:01.252207 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b7346-api-0" podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerName="cinder-b7346-api-log" containerID="cri-o://f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427" gracePeriod=30 Feb 24 05:53:01.252579 master-0 kubenswrapper[34361]: I0224 05:53:01.252541 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b7346-api-0" Feb 24 05:53:01.252652 master-0 kubenswrapper[34361]: I0224 05:53:01.252617 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b7346-api-0" podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerName="cinder-api" containerID="cri-o://39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9" gracePeriod=30 Feb 24 05:53:01.260353 master-0 kubenswrapper[34361]: I0224 05:53:01.260255 34361 generic.go:334] "Generic (PLEG): container finished" podID="7e30393e-c247-4ba9-9db9-864d16ba6d82" containerID="9d60d4d6b2af8e7533e56ae9ba0ebd383f1b4443362a0493fff00bdb76302614" exitCode=0 Feb 24 05:53:01.260499 master-0 kubenswrapper[34361]: I0224 05:53:01.260349 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s9d6l" event={"ID":"7e30393e-c247-4ba9-9db9-864d16ba6d82","Type":"ContainerDied","Data":"9d60d4d6b2af8e7533e56ae9ba0ebd383f1b4443362a0493fff00bdb76302614"} Feb 24 05:53:01.264822 master-0 kubenswrapper[34361]: I0224 05:53:01.264786 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" 
event={"ID":"c1490d04-fc1d-488b-a427-285554ec1692","Type":"ContainerStarted","Data":"7e1b2ebdea4f1ac8655c81e6c89c7a613460cc40d12b5b35c1d7fd9d0e90e437"} Feb 24 05:53:01.288869 master-0 kubenswrapper[34361]: I0224 05:53:01.288766 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" podStartSLOduration=4.294753814 podStartE2EDuration="5.288738562s" podCreationTimestamp="2026-02-24 05:52:56 +0000 UTC" firstStartedPulling="2026-02-24 05:52:58.437728198 +0000 UTC m=+938.140345234" lastFinishedPulling="2026-02-24 05:52:59.431712936 +0000 UTC m=+939.134329982" observedRunningTime="2026-02-24 05:53:01.278835884 +0000 UTC m=+940.981452940" watchObservedRunningTime="2026-02-24 05:53:01.288738562 +0000 UTC m=+940.991355608" Feb 24 05:53:01.340211 master-0 kubenswrapper[34361]: I0224 05:53:01.340118 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-api-0" podStartSLOduration=5.340090806 podStartE2EDuration="5.340090806s" podCreationTimestamp="2026-02-24 05:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:01.339358557 +0000 UTC m=+941.041975623" watchObservedRunningTime="2026-02-24 05:53:01.340090806 +0000 UTC m=+941.042707852" Feb 24 05:53:01.355139 master-0 kubenswrapper[34361]: I0224 05:53:01.350072 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-scheduler-0" podStartSLOduration=4.350185558 podStartE2EDuration="5.350045315s" podCreationTimestamp="2026-02-24 05:52:56 +0000 UTC" firstStartedPulling="2026-02-24 05:52:57.412946098 +0000 UTC m=+937.115563144" lastFinishedPulling="2026-02-24 05:52:58.412805855 +0000 UTC m=+938.115422901" observedRunningTime="2026-02-24 05:53:01.309099081 +0000 UTC m=+941.011716137" watchObservedRunningTime="2026-02-24 05:53:01.350045315 +0000 UTC m=+941.052662351" Feb 
24 05:53:01.401954 master-0 kubenswrapper[34361]: I0224 05:53:01.401854 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-backup-0" podStartSLOduration=4.3715919549999995 podStartE2EDuration="5.401821171s" podCreationTimestamp="2026-02-24 05:52:56 +0000 UTC" firstStartedPulling="2026-02-24 05:52:58.047771971 +0000 UTC m=+937.750389017" lastFinishedPulling="2026-02-24 05:52:59.078001187 +0000 UTC m=+938.780618233" observedRunningTime="2026-02-24 05:53:01.38547947 +0000 UTC m=+941.088096536" watchObservedRunningTime="2026-02-24 05:53:01.401821171 +0000 UTC m=+941.104438217" Feb 24 05:53:01.770413 master-0 kubenswrapper[34361]: I0224 05:53:01.770338 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:01.862682 master-0 kubenswrapper[34361]: I0224 05:53:01.862582 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:02.087127 master-0 kubenswrapper[34361]: I0224 05:53:02.086940 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:02.276381 master-0 kubenswrapper[34361]: I0224 05:53:02.276019 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b7346-api-0" Feb 24 05:53:02.281135 master-0 kubenswrapper[34361]: I0224 05:53:02.280925 34361 generic.go:334] "Generic (PLEG): container finished" podID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerID="39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9" exitCode=0 Feb 24 05:53:02.281135 master-0 kubenswrapper[34361]: I0224 05:53:02.280955 34361 generic.go:334] "Generic (PLEG): container finished" podID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerID="f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427" exitCode=143 Feb 24 05:53:02.281135 master-0 kubenswrapper[34361]: I0224 05:53:02.281078 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"3d496d7e-f8ee-477d-ba68-1084904b9b33","Type":"ContainerDied","Data":"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9"} Feb 24 05:53:02.281275 master-0 kubenswrapper[34361]: I0224 05:53:02.281155 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"3d496d7e-f8ee-477d-ba68-1084904b9b33","Type":"ContainerDied","Data":"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427"} Feb 24 05:53:02.281275 master-0 kubenswrapper[34361]: I0224 05:53:02.281171 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"3d496d7e-f8ee-477d-ba68-1084904b9b33","Type":"ContainerDied","Data":"562ac7cecb36abae9da3b1589971463014027e7d811f508b622a57eb112161a7"} Feb 24 05:53:02.281275 master-0 kubenswrapper[34361]: I0224 05:53:02.281194 34361 scope.go:117] "RemoveContainer" containerID="39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9" Feb 24 05:53:02.318830 master-0 kubenswrapper[34361]: I0224 05:53:02.318753 34361 scope.go:117] "RemoveContainer" containerID="f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427" Feb 24 05:53:02.374704 master-0 kubenswrapper[34361]: I0224 
05:53:02.374651 34361 scope.go:117] "RemoveContainer" containerID="39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9" Feb 24 05:53:02.377992 master-0 kubenswrapper[34361]: E0224 05:53:02.377920 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9\": container with ID starting with 39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9 not found: ID does not exist" containerID="39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9" Feb 24 05:53:02.378073 master-0 kubenswrapper[34361]: I0224 05:53:02.378011 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9"} err="failed to get container status \"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9\": rpc error: code = NotFound desc = could not find container \"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9\": container with ID starting with 39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9 not found: ID does not exist" Feb 24 05:53:02.378073 master-0 kubenswrapper[34361]: I0224 05:53:02.378056 34361 scope.go:117] "RemoveContainer" containerID="f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427" Feb 24 05:53:02.379946 master-0 kubenswrapper[34361]: E0224 05:53:02.379913 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427\": container with ID starting with f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427 not found: ID does not exist" containerID="f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427" Feb 24 05:53:02.380121 master-0 kubenswrapper[34361]: I0224 05:53:02.379960 34361 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427"} err="failed to get container status \"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427\": rpc error: code = NotFound desc = could not find container \"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427\": container with ID starting with f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427 not found: ID does not exist" Feb 24 05:53:02.380121 master-0 kubenswrapper[34361]: I0224 05:53:02.379992 34361 scope.go:117] "RemoveContainer" containerID="39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9" Feb 24 05:53:02.382485 master-0 kubenswrapper[34361]: I0224 05:53:02.380489 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9"} err="failed to get container status \"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9\": rpc error: code = NotFound desc = could not find container \"39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9\": container with ID starting with 39eda24ddbe3fc9a2014f034ad8fa4bceb1d4e04f2728cfff260c69d4bd334b9 not found: ID does not exist" Feb 24 05:53:02.382485 master-0 kubenswrapper[34361]: I0224 05:53:02.380522 34361 scope.go:117] "RemoveContainer" containerID="f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427" Feb 24 05:53:02.382485 master-0 kubenswrapper[34361]: I0224 05:53:02.381262 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427"} err="failed to get container status \"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427\": rpc error: code = NotFound desc = could not find container \"f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427\": 
container with ID starting with f96a312b01c7e833b0a29525685ab6754d7aed185f9a085808e0c3892b659427 not found: ID does not exist" Feb 24 05:53:02.422007 master-0 kubenswrapper[34361]: I0224 05:53:02.421890 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d496d7e-f8ee-477d-ba68-1084904b9b33-logs\") pod \"3d496d7e-f8ee-477d-ba68-1084904b9b33\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " Feb 24 05:53:02.422007 master-0 kubenswrapper[34361]: I0224 05:53:02.421988 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data-custom\") pod \"3d496d7e-f8ee-477d-ba68-1084904b9b33\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " Feb 24 05:53:02.422232 master-0 kubenswrapper[34361]: I0224 05:53:02.422077 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d496d7e-f8ee-477d-ba68-1084904b9b33-etc-machine-id\") pod \"3d496d7e-f8ee-477d-ba68-1084904b9b33\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " Feb 24 05:53:02.422232 master-0 kubenswrapper[34361]: I0224 05:53:02.422109 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data\") pod \"3d496d7e-f8ee-477d-ba68-1084904b9b33\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " Feb 24 05:53:02.422232 master-0 kubenswrapper[34361]: I0224 05:53:02.422192 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-scripts\") pod \"3d496d7e-f8ee-477d-ba68-1084904b9b33\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " Feb 24 05:53:02.422395 master-0 kubenswrapper[34361]: I0224 
05:53:02.422270 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc5fx\" (UniqueName: \"kubernetes.io/projected/3d496d7e-f8ee-477d-ba68-1084904b9b33-kube-api-access-dc5fx\") pod \"3d496d7e-f8ee-477d-ba68-1084904b9b33\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " Feb 24 05:53:02.422395 master-0 kubenswrapper[34361]: I0224 05:53:02.422295 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-combined-ca-bundle\") pod \"3d496d7e-f8ee-477d-ba68-1084904b9b33\" (UID: \"3d496d7e-f8ee-477d-ba68-1084904b9b33\") " Feb 24 05:53:02.422675 master-0 kubenswrapper[34361]: I0224 05:53:02.422578 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d496d7e-f8ee-477d-ba68-1084904b9b33-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3d496d7e-f8ee-477d-ba68-1084904b9b33" (UID: "3d496d7e-f8ee-477d-ba68-1084904b9b33"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:02.422859 master-0 kubenswrapper[34361]: I0224 05:53:02.422754 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d496d7e-f8ee-477d-ba68-1084904b9b33-logs" (OuterVolumeSpecName: "logs") pod "3d496d7e-f8ee-477d-ba68-1084904b9b33" (UID: "3d496d7e-f8ee-477d-ba68-1084904b9b33"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:02.426492 master-0 kubenswrapper[34361]: I0224 05:53:02.425124 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d496d7e-f8ee-477d-ba68-1084904b9b33-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:02.426492 master-0 kubenswrapper[34361]: I0224 05:53:02.425151 34361 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3d496d7e-f8ee-477d-ba68-1084904b9b33-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:02.428467 master-0 kubenswrapper[34361]: I0224 05:53:02.428036 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3d496d7e-f8ee-477d-ba68-1084904b9b33" (UID: "3d496d7e-f8ee-477d-ba68-1084904b9b33"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:02.430079 master-0 kubenswrapper[34361]: I0224 05:53:02.429387 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-scripts" (OuterVolumeSpecName: "scripts") pod "3d496d7e-f8ee-477d-ba68-1084904b9b33" (UID: "3d496d7e-f8ee-477d-ba68-1084904b9b33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:02.430079 master-0 kubenswrapper[34361]: I0224 05:53:02.429895 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d496d7e-f8ee-477d-ba68-1084904b9b33-kube-api-access-dc5fx" (OuterVolumeSpecName: "kube-api-access-dc5fx") pod "3d496d7e-f8ee-477d-ba68-1084904b9b33" (UID: "3d496d7e-f8ee-477d-ba68-1084904b9b33"). InnerVolumeSpecName "kube-api-access-dc5fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:02.466336 master-0 kubenswrapper[34361]: I0224 05:53:02.464356 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d496d7e-f8ee-477d-ba68-1084904b9b33" (UID: "3d496d7e-f8ee-477d-ba68-1084904b9b33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:02.500965 master-0 kubenswrapper[34361]: I0224 05:53:02.500810 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data" (OuterVolumeSpecName: "config-data") pod "3d496d7e-f8ee-477d-ba68-1084904b9b33" (UID: "3d496d7e-f8ee-477d-ba68-1084904b9b33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:02.528703 master-0 kubenswrapper[34361]: I0224 05:53:02.527918 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:02.528703 master-0 kubenswrapper[34361]: I0224 05:53:02.527997 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:02.528703 master-0 kubenswrapper[34361]: I0224 05:53:02.528014 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc5fx\" (UniqueName: \"kubernetes.io/projected/3d496d7e-f8ee-477d-ba68-1084904b9b33-kube-api-access-dc5fx\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:02.528703 master-0 kubenswrapper[34361]: I0224 05:53:02.528031 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:02.528703 master-0 kubenswrapper[34361]: I0224 05:53:02.528047 34361 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d496d7e-f8ee-477d-ba68-1084904b9b33-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:02.942588 master-0 kubenswrapper[34361]: I0224 05:53:02.941264 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:53:03.046386 master-0 kubenswrapper[34361]: I0224 05:53:03.046339 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-scripts\") pod \"7e30393e-c247-4ba9-9db9-864d16ba6d82\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " Feb 24 05:53:03.046585 master-0 kubenswrapper[34361]: I0224 05:53:03.046412 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7e30393e-c247-4ba9-9db9-864d16ba6d82-etc-podinfo\") pod \"7e30393e-c247-4ba9-9db9-864d16ba6d82\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " Feb 24 05:53:03.047170 master-0 kubenswrapper[34361]: I0224 05:53:03.047140 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data-merged\") pod \"7e30393e-c247-4ba9-9db9-864d16ba6d82\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " Feb 24 05:53:03.047236 master-0 kubenswrapper[34361]: I0224 05:53:03.047177 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data\") pod 
\"7e30393e-c247-4ba9-9db9-864d16ba6d82\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " Feb 24 05:53:03.047378 master-0 kubenswrapper[34361]: I0224 05:53:03.047290 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-combined-ca-bundle\") pod \"7e30393e-c247-4ba9-9db9-864d16ba6d82\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " Feb 24 05:53:03.047378 master-0 kubenswrapper[34361]: I0224 05:53:03.047344 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pbkx\" (UniqueName: \"kubernetes.io/projected/7e30393e-c247-4ba9-9db9-864d16ba6d82-kube-api-access-8pbkx\") pod \"7e30393e-c247-4ba9-9db9-864d16ba6d82\" (UID: \"7e30393e-c247-4ba9-9db9-864d16ba6d82\") " Feb 24 05:53:03.049072 master-0 kubenswrapper[34361]: I0224 05:53:03.049023 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "7e30393e-c247-4ba9-9db9-864d16ba6d82" (UID: "7e30393e-c247-4ba9-9db9-864d16ba6d82"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:03.055177 master-0 kubenswrapper[34361]: I0224 05:53:03.055129 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e30393e-c247-4ba9-9db9-864d16ba6d82-kube-api-access-8pbkx" (OuterVolumeSpecName: "kube-api-access-8pbkx") pod "7e30393e-c247-4ba9-9db9-864d16ba6d82" (UID: "7e30393e-c247-4ba9-9db9-864d16ba6d82"). InnerVolumeSpecName "kube-api-access-8pbkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:03.058472 master-0 kubenswrapper[34361]: I0224 05:53:03.058433 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7e30393e-c247-4ba9-9db9-864d16ba6d82-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "7e30393e-c247-4ba9-9db9-864d16ba6d82" (UID: "7e30393e-c247-4ba9-9db9-864d16ba6d82"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 05:53:03.058732 master-0 kubenswrapper[34361]: I0224 05:53:03.058686 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-scripts" (OuterVolumeSpecName: "scripts") pod "7e30393e-c247-4ba9-9db9-864d16ba6d82" (UID: "7e30393e-c247-4ba9-9db9-864d16ba6d82"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:03.109660 master-0 kubenswrapper[34361]: I0224 05:53:03.109461 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data" (OuterVolumeSpecName: "config-data") pod "7e30393e-c247-4ba9-9db9-864d16ba6d82" (UID: "7e30393e-c247-4ba9-9db9-864d16ba6d82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:03.150675 master-0 kubenswrapper[34361]: I0224 05:53:03.150610 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pbkx\" (UniqueName: \"kubernetes.io/projected/7e30393e-c247-4ba9-9db9-864d16ba6d82-kube-api-access-8pbkx\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:03.150923 master-0 kubenswrapper[34361]: I0224 05:53:03.150903 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:03.151006 master-0 kubenswrapper[34361]: I0224 05:53:03.150926 34361 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7e30393e-c247-4ba9-9db9-864d16ba6d82-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:03.151006 master-0 kubenswrapper[34361]: I0224 05:53:03.150937 34361 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:03.151006 master-0 kubenswrapper[34361]: I0224 05:53:03.150946 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:03.153039 master-0 kubenswrapper[34361]: I0224 05:53:03.152963 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e30393e-c247-4ba9-9db9-864d16ba6d82" (UID: "7e30393e-c247-4ba9-9db9-864d16ba6d82"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:03.253125 master-0 kubenswrapper[34361]: I0224 05:53:03.253028 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e30393e-c247-4ba9-9db9-864d16ba6d82-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:03.268235 master-0 kubenswrapper[34361]: I0224 05:53:03.268168 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:53:03.314065 master-0 kubenswrapper[34361]: I0224 05:53:03.314000 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.329479 master-0 kubenswrapper[34361]: I0224 05:53:03.328914 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-s9d6l" Feb 24 05:53:03.330144 master-0 kubenswrapper[34361]: I0224 05:53:03.329468 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-s9d6l" event={"ID":"7e30393e-c247-4ba9-9db9-864d16ba6d82","Type":"ContainerDied","Data":"63d7eeb97c7bbb78d31359f9d0c2c236d4d6ed47b3d39b598e29fc0e3ace5a51"} Feb 24 05:53:03.330144 master-0 kubenswrapper[34361]: I0224 05:53:03.329550 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63d7eeb97c7bbb78d31359f9d0c2c236d4d6ed47b3d39b598e29fc0e3ace5a51" Feb 24 05:53:03.350361 master-0 kubenswrapper[34361]: I0224 05:53:03.349100 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:53:03.369420 master-0 kubenswrapper[34361]: I0224 05:53:03.369211 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-api-0"] Feb 24 05:53:03.381665 master-0 kubenswrapper[34361]: I0224 05:53:03.381574 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-b7346-api-0"] Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.495935 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-api-0"] Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: E0224 05:53:03.496982 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e30393e-c247-4ba9-9db9-864d16ba6d82" containerName="init" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497006 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e30393e-c247-4ba9-9db9-864d16ba6d82" containerName="init" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: E0224 05:53:03.497049 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e30393e-c247-4ba9-9db9-864d16ba6d82" containerName="ironic-db-sync" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497058 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e30393e-c247-4ba9-9db9-864d16ba6d82" containerName="ironic-db-sync" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: E0224 05:53:03.497087 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" containerName="init" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497097 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" containerName="init" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: E0224 05:53:03.497115 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" containerName="dnsmasq-dns" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497121 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" containerName="dnsmasq-dns" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: E0224 05:53:03.497137 34361 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerName="cinder-b7346-api-log" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497143 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerName="cinder-b7346-api-log" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: E0224 05:53:03.497206 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerName="cinder-api" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497214 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerName="cinder-api" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497617 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de3f381-7d5a-46fe-9c93-97e35cca31d1" containerName="dnsmasq-dns" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497642 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerName="cinder-api" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497662 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e30393e-c247-4ba9-9db9-864d16ba6d82" containerName="ironic-db-sync" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.497681 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" containerName="cinder-b7346-api-log" Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.498909 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-api-0"] Feb 24 05:53:03.500645 master-0 kubenswrapper[34361]: I0224 05:53:03.499007 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.510336 master-0 kubenswrapper[34361]: I0224 05:53:03.509959 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-api-config-data" Feb 24 05:53:03.510336 master-0 kubenswrapper[34361]: I0224 05:53:03.510097 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 24 05:53:03.510336 master-0 kubenswrapper[34361]: I0224 05:53:03.510332 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.565719 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abacbccf-fef9-4c23-86af-7d01714da00b-logs\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.565800 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-scripts\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.566191 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-combined-ca-bundle\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.566227 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-config-data-custom\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.566397 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-config-data\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.566459 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kxht\" (UniqueName: \"kubernetes.io/projected/abacbccf-fef9-4c23-86af-7d01714da00b-kube-api-access-9kxht\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.566551 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-internal-tls-certs\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.566602 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-public-tls-certs\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.566966 master-0 kubenswrapper[34361]: I0224 05:53:03.566654 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abacbccf-fef9-4c23-86af-7d01714da00b-etc-machine-id\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670350 master-0 kubenswrapper[34361]: I0224 05:53:03.670185 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-scripts\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670350 master-0 kubenswrapper[34361]: I0224 05:53:03.670283 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-combined-ca-bundle\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670350 master-0 kubenswrapper[34361]: I0224 05:53:03.670338 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-config-data-custom\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670721 master-0 kubenswrapper[34361]: I0224 05:53:03.670436 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-config-data\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670721 master-0 kubenswrapper[34361]: I0224 05:53:03.670470 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kxht\" (UniqueName: 
\"kubernetes.io/projected/abacbccf-fef9-4c23-86af-7d01714da00b-kube-api-access-9kxht\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670721 master-0 kubenswrapper[34361]: I0224 05:53:03.670517 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-internal-tls-certs\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670721 master-0 kubenswrapper[34361]: I0224 05:53:03.670557 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-public-tls-certs\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670721 master-0 kubenswrapper[34361]: I0224 05:53:03.670583 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abacbccf-fef9-4c23-86af-7d01714da00b-etc-machine-id\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.670721 master-0 kubenswrapper[34361]: I0224 05:53:03.670654 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abacbccf-fef9-4c23-86af-7d01714da00b-logs\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.679350 master-0 kubenswrapper[34361]: I0224 05:53:03.671224 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/abacbccf-fef9-4c23-86af-7d01714da00b-logs\") pod \"cinder-b7346-api-0\" (UID: 
\"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.679350 master-0 kubenswrapper[34361]: I0224 05:53:03.674422 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/abacbccf-fef9-4c23-86af-7d01714da00b-etc-machine-id\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.690352 master-0 kubenswrapper[34361]: I0224 05:53:03.680852 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-config-data-custom\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.690352 master-0 kubenswrapper[34361]: I0224 05:53:03.683147 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-combined-ca-bundle\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.690352 master-0 kubenswrapper[34361]: I0224 05:53:03.683383 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-internal-tls-certs\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.690352 master-0 kubenswrapper[34361]: I0224 05:53:03.683842 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-public-tls-certs\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.691335 
master-0 kubenswrapper[34361]: I0224 05:53:03.691250 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-scripts\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.695768 master-0 kubenswrapper[34361]: I0224 05:53:03.692497 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/abacbccf-fef9-4c23-86af-7d01714da00b-config-data\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.695768 master-0 kubenswrapper[34361]: I0224 05:53:03.692965 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7d9548858-h45cl"] Feb 24 05:53:03.695768 master-0 kubenswrapper[34361]: I0224 05:53:03.695747 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.729639 master-0 kubenswrapper[34361]: I0224 05:53:03.721068 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d9548858-h45cl"] Feb 24 05:53:03.737334 master-0 kubenswrapper[34361]: I0224 05:53:03.731835 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kxht\" (UniqueName: \"kubernetes.io/projected/abacbccf-fef9-4c23-86af-7d01714da00b-kube-api-access-9kxht\") pod \"cinder-b7346-api-0\" (UID: \"abacbccf-fef9-4c23-86af-7d01714da00b\") " pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.780329 master-0 kubenswrapper[34361]: I0224 05:53:03.775463 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-internal-tls-certs\") pod \"placement-7d9548858-h45cl\" (UID: 
\"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.780329 master-0 kubenswrapper[34361]: I0224 05:53:03.775559 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-scripts\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.780329 master-0 kubenswrapper[34361]: I0224 05:53:03.775721 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-combined-ca-bundle\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.780329 master-0 kubenswrapper[34361]: I0224 05:53:03.775761 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tncwk\" (UniqueName: \"kubernetes.io/projected/e585e3c7-27e7-4583-8053-fd9d301a9881-kube-api-access-tncwk\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.780329 master-0 kubenswrapper[34361]: I0224 05:53:03.775852 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-config-data\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.780329 master-0 kubenswrapper[34361]: I0224 05:53:03.775901 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e585e3c7-27e7-4583-8053-fd9d301a9881-logs\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.780329 master-0 kubenswrapper[34361]: I0224 05:53:03.775956 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-public-tls-certs\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.850460 master-0 kubenswrapper[34361]: I0224 05:53:03.847279 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-api-0" Feb 24 05:53:03.887326 master-0 kubenswrapper[34361]: I0224 05:53:03.878460 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-config-data\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.887326 master-0 kubenswrapper[34361]: I0224 05:53:03.878546 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e585e3c7-27e7-4583-8053-fd9d301a9881-logs\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.887326 master-0 kubenswrapper[34361]: I0224 05:53:03.878592 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-public-tls-certs\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.887326 master-0 
kubenswrapper[34361]: I0224 05:53:03.878627 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-internal-tls-certs\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.887326 master-0 kubenswrapper[34361]: I0224 05:53:03.878658 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-scripts\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.887326 master-0 kubenswrapper[34361]: I0224 05:53:03.878718 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-combined-ca-bundle\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.887326 master-0 kubenswrapper[34361]: I0224 05:53:03.878745 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tncwk\" (UniqueName: \"kubernetes.io/projected/e585e3c7-27e7-4583-8053-fd9d301a9881-kube-api-access-tncwk\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.898328 master-0 kubenswrapper[34361]: I0224 05:53:03.895232 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e585e3c7-27e7-4583-8053-fd9d301a9881-logs\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.904362 master-0 kubenswrapper[34361]: I0224 
05:53:03.904238 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-internal-tls-certs\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.923933 master-0 kubenswrapper[34361]: I0224 05:53:03.920679 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tncwk\" (UniqueName: \"kubernetes.io/projected/e585e3c7-27e7-4583-8053-fd9d301a9881-kube-api-access-tncwk\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.927323 master-0 kubenswrapper[34361]: I0224 05:53:03.926203 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-scripts\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.931939 master-0 kubenswrapper[34361]: I0224 05:53:03.931651 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-config-data\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.938152 master-0 kubenswrapper[34361]: I0224 05:53:03.936916 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-public-tls-certs\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.938152 master-0 kubenswrapper[34361]: I0224 05:53:03.938086 34361 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e585e3c7-27e7-4583-8053-fd9d301a9881-combined-ca-bundle\") pod \"placement-7d9548858-h45cl\" (UID: \"e585e3c7-27e7-4583-8053-fd9d301a9881\") " pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:03.966355 master-0 kubenswrapper[34361]: I0224 05:53:03.961845 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-pwcj4"] Feb 24 05:53:03.966355 master-0 kubenswrapper[34361]: I0224 05:53:03.963864 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-pwcj4" Feb 24 05:53:04.034333 master-0 kubenswrapper[34361]: I0224 05:53:04.032349 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-pwcj4"] Feb 24 05:53:04.049353 master-0 kubenswrapper[34361]: I0224 05:53:04.047740 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-6402-account-create-update-kj7ts"] Feb 24 05:53:04.088334 master-0 kubenswrapper[34361]: I0224 05:53:04.087872 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" Feb 24 05:53:04.119832 master-0 kubenswrapper[34361]: I0224 05:53:04.115852 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Feb 24 05:53:04.186390 master-0 kubenswrapper[34361]: I0224 05:53:04.169585 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0262a8-c31e-4022-bf1e-7952af276733-operator-scripts\") pod \"ironic-inspector-db-create-pwcj4\" (UID: \"0a0262a8-c31e-4022-bf1e-7952af276733\") " pod="openstack/ironic-inspector-db-create-pwcj4" Feb 24 05:53:04.186390 master-0 kubenswrapper[34361]: I0224 05:53:04.169758 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s96kw\" (UniqueName: \"kubernetes.io/projected/68c5e68e-c7ed-4fb9-a323-2104110a3742-kube-api-access-s96kw\") pod \"ironic-inspector-6402-account-create-update-kj7ts\" (UID: \"68c5e68e-c7ed-4fb9-a323-2104110a3742\") " pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" Feb 24 05:53:04.186390 master-0 kubenswrapper[34361]: I0224 05:53:04.169861 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68c5e68e-c7ed-4fb9-a323-2104110a3742-operator-scripts\") pod \"ironic-inspector-6402-account-create-update-kj7ts\" (UID: \"68c5e68e-c7ed-4fb9-a323-2104110a3742\") " pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" Feb 24 05:53:04.186390 master-0 kubenswrapper[34361]: I0224 05:53:04.170009 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsd8t\" (UniqueName: \"kubernetes.io/projected/0a0262a8-c31e-4022-bf1e-7952af276733-kube-api-access-gsd8t\") pod 
\"ironic-inspector-db-create-pwcj4\" (UID: \"0a0262a8-c31e-4022-bf1e-7952af276733\") " pod="openstack/ironic-inspector-db-create-pwcj4" Feb 24 05:53:04.186390 master-0 kubenswrapper[34361]: I0224 05:53:04.181476 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:04.315974 master-0 kubenswrapper[34361]: I0224 05:53:04.315888 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-6402-account-create-update-kj7ts"] Feb 24 05:53:04.321985 master-0 kubenswrapper[34361]: I0224 05:53:04.321819 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s96kw\" (UniqueName: \"kubernetes.io/projected/68c5e68e-c7ed-4fb9-a323-2104110a3742-kube-api-access-s96kw\") pod \"ironic-inspector-6402-account-create-update-kj7ts\" (UID: \"68c5e68e-c7ed-4fb9-a323-2104110a3742\") " pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" Feb 24 05:53:04.322256 master-0 kubenswrapper[34361]: I0224 05:53:04.322004 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68c5e68e-c7ed-4fb9-a323-2104110a3742-operator-scripts\") pod \"ironic-inspector-6402-account-create-update-kj7ts\" (UID: \"68c5e68e-c7ed-4fb9-a323-2104110a3742\") " pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" Feb 24 05:53:04.332965 master-0 kubenswrapper[34361]: I0224 05:53:04.328774 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsd8t\" (UniqueName: \"kubernetes.io/projected/0a0262a8-c31e-4022-bf1e-7952af276733-kube-api-access-gsd8t\") pod \"ironic-inspector-db-create-pwcj4\" (UID: \"0a0262a8-c31e-4022-bf1e-7952af276733\") " pod="openstack/ironic-inspector-db-create-pwcj4" Feb 24 05:53:04.332965 master-0 kubenswrapper[34361]: I0224 05:53:04.329210 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0262a8-c31e-4022-bf1e-7952af276733-operator-scripts\") pod \"ironic-inspector-db-create-pwcj4\" (UID: \"0a0262a8-c31e-4022-bf1e-7952af276733\") " pod="openstack/ironic-inspector-db-create-pwcj4" Feb 24 05:53:04.332965 master-0 kubenswrapper[34361]: I0224 05:53:04.329369 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68c5e68e-c7ed-4fb9-a323-2104110a3742-operator-scripts\") pod \"ironic-inspector-6402-account-create-update-kj7ts\" (UID: \"68c5e68e-c7ed-4fb9-a323-2104110a3742\") " pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" Feb 24 05:53:04.332965 master-0 kubenswrapper[34361]: I0224 05:53:04.330849 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0262a8-c31e-4022-bf1e-7952af276733-operator-scripts\") pod \"ironic-inspector-db-create-pwcj4\" (UID: \"0a0262a8-c31e-4022-bf1e-7952af276733\") " pod="openstack/ironic-inspector-db-create-pwcj4" Feb 24 05:53:04.364092 master-0 kubenswrapper[34361]: I0224 05:53:04.363526 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsd8t\" (UniqueName: \"kubernetes.io/projected/0a0262a8-c31e-4022-bf1e-7952af276733-kube-api-access-gsd8t\") pod \"ironic-inspector-db-create-pwcj4\" (UID: \"0a0262a8-c31e-4022-bf1e-7952af276733\") " pod="openstack/ironic-inspector-db-create-pwcj4" Feb 24 05:53:04.373126 master-0 kubenswrapper[34361]: I0224 05:53:04.371977 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s96kw\" (UniqueName: \"kubernetes.io/projected/68c5e68e-c7ed-4fb9-a323-2104110a3742-kube-api-access-s96kw\") pod \"ironic-inspector-6402-account-create-update-kj7ts\" (UID: \"68c5e68e-c7ed-4fb9-a323-2104110a3742\") " 
pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" Feb 24 05:53:04.381570 master-0 kubenswrapper[34361]: I0224 05:53:04.381475 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-856d98ff5d-2p7np"] Feb 24 05:53:04.384221 master-0 kubenswrapper[34361]: I0224 05:53:04.383986 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.400380 master-0 kubenswrapper[34361]: I0224 05:53:04.391125 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Feb 24 05:53:04.412983 master-0 kubenswrapper[34361]: I0224 05:53:04.412907 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66c9d5d889-nmpw7"] Feb 24 05:53:04.413426 master-0 kubenswrapper[34361]: I0224 05:53:04.413335 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" podUID="3c458e23-405b-449a-8e0b-aa6e42a286c9" containerName="dnsmasq-dns" containerID="cri-o://719381efa530f9326b166c8c44acc394693bf446e30448e49cc2d04aeac11fa2" gracePeriod=10 Feb 24 05:53:04.416600 master-0 kubenswrapper[34361]: I0224 05:53:04.416105 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:53:04.439571 master-0 kubenswrapper[34361]: I0224 05:53:04.432274 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-856d98ff5d-2p7np"] Feb 24 05:53:04.468509 master-0 kubenswrapper[34361]: I0224 05:53:04.455204 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-pwcj4" Feb 24 05:53:04.468509 master-0 kubenswrapper[34361]: I0224 05:53:04.466412 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-555fd64789-cgpft"] Feb 24 05:53:04.482372 master-0 kubenswrapper[34361]: I0224 05:53:04.480170 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.507067 master-0 kubenswrapper[34361]: I0224 05:53:04.495733 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Feb 24 05:53:04.507067 master-0 kubenswrapper[34361]: I0224 05:53:04.496377 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Feb 24 05:53:04.507067 master-0 kubenswrapper[34361]: I0224 05:53:04.497051 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 24 05:53:04.507067 master-0 kubenswrapper[34361]: I0224 05:53:04.497175 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 24 05:53:04.535946 master-0 kubenswrapper[34361]: I0224 05:53:04.528751 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Feb 24 05:53:04.537096 master-0 kubenswrapper[34361]: I0224 05:53:04.536938 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a5b237-764f-4367-85a5-4153a8f90a3e-combined-ca-bundle\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.537195 master-0 kubenswrapper[34361]: I0224 05:53:04.537138 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/40a5b237-764f-4367-85a5-4153a8f90a3e-config\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.537267 master-0 kubenswrapper[34361]: I0224 05:53:04.537214 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c2vq\" (UniqueName: \"kubernetes.io/projected/40a5b237-764f-4367-85a5-4153a8f90a3e-kube-api-access-5c2vq\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.548367 master-0 kubenswrapper[34361]: I0224 05:53:04.546565 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d9d8bd467-64rvv"] Feb 24 05:53:04.549773 master-0 kubenswrapper[34361]: I0224 05:53:04.549714 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.560504 master-0 kubenswrapper[34361]: I0224 05:53:04.560420 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-555fd64789-cgpft"] Feb 24 05:53:04.582265 master-0 kubenswrapper[34361]: I0224 05:53:04.580064 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d9d8bd467-64rvv"] Feb 24 05:53:04.624678 master-0 kubenswrapper[34361]: I0224 05:53:04.612089 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641421 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p2h7\" (UniqueName: \"kubernetes.io/projected/700c3143-d1a3-47a3-92f5-02a0b1e428a4-kube-api-access-5p2h7\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641496 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-config\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641537 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2hpp\" (UniqueName: \"kubernetes.io/projected/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-kube-api-access-n2hpp\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641574 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a5b237-764f-4367-85a5-4153a8f90a3e-combined-ca-bundle\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641615 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-nb\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641654 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-sb\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641674 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/40a5b237-764f-4367-85a5-4153a8f90a3e-config\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641699 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c2vq\" (UniqueName: \"kubernetes.io/projected/40a5b237-764f-4367-85a5-4153a8f90a3e-kube-api-access-5c2vq\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641720 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-logs\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641751 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-swift-storage-0\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641770 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-merged\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641792 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641815 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/700c3143-d1a3-47a3-92f5-02a0b1e428a4-etc-podinfo\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641832 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-svc\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.642204 master-0 
kubenswrapper[34361]: I0224 05:53:04.641848 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-combined-ca-bundle\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641873 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-scripts\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.642204 master-0 kubenswrapper[34361]: I0224 05:53:04.641892 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-custom\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.664743 master-0 kubenswrapper[34361]: I0224 05:53:04.649410 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a5b237-764f-4367-85a5-4153a8f90a3e-combined-ca-bundle\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.681530 master-0 kubenswrapper[34361]: I0224 05:53:04.681379 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/40a5b237-764f-4367-85a5-4153a8f90a3e-config\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " 
pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.689134 master-0 kubenswrapper[34361]: I0224 05:53:04.687269 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c2vq\" (UniqueName: \"kubernetes.io/projected/40a5b237-764f-4367-85a5-4153a8f90a3e-kube-api-access-5c2vq\") pod \"ironic-neutron-agent-856d98ff5d-2p7np\" (UID: \"40a5b237-764f-4367-85a5-4153a8f90a3e\") " pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.696818 master-0 kubenswrapper[34361]: I0224 05:53:04.696752 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d496d7e-f8ee-477d-ba68-1084904b9b33" path="/var/lib/kubelet/pods/3d496d7e-f8ee-477d-ba68-1084904b9b33/volumes" Feb 24 05:53:04.763567 master-0 kubenswrapper[34361]: I0224 05:53:04.748868 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-nb\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.763882 master-0 kubenswrapper[34361]: I0224 05:53:04.763757 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-sb\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.763882 master-0 kubenswrapper[34361]: I0224 05:53:04.763851 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-logs\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.763987 master-0 kubenswrapper[34361]: I0224 05:53:04.763956 
34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-swift-storage-0\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.764036 master-0 kubenswrapper[34361]: I0224 05:53:04.764009 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-merged\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.764100 master-0 kubenswrapper[34361]: I0224 05:53:04.764077 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.764169 master-0 kubenswrapper[34361]: I0224 05:53:04.764147 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/700c3143-d1a3-47a3-92f5-02a0b1e428a4-etc-podinfo\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.764218 master-0 kubenswrapper[34361]: I0224 05:53:04.764185 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-svc\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.764218 master-0 kubenswrapper[34361]: I0224 05:53:04.764210 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-combined-ca-bundle\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.764285 master-0 kubenswrapper[34361]: I0224 05:53:04.764264 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-scripts\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.764348 master-0 kubenswrapper[34361]: I0224 05:53:04.764328 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-custom\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.764610 master-0 kubenswrapper[34361]: I0224 05:53:04.764586 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p2h7\" (UniqueName: \"kubernetes.io/projected/700c3143-d1a3-47a3-92f5-02a0b1e428a4-kube-api-access-5p2h7\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.764661 master-0 kubenswrapper[34361]: I0224 05:53:04.764633 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-config\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.764719 master-0 kubenswrapper[34361]: I0224 05:53:04.764697 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n2hpp\" (UniqueName: \"kubernetes.io/projected/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-kube-api-access-n2hpp\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.766093 master-0 kubenswrapper[34361]: I0224 05:53:04.766045 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-svc\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.766698 master-0 kubenswrapper[34361]: I0224 05:53:04.766671 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-nb\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.767457 master-0 kubenswrapper[34361]: I0224 05:53:04.767296 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-sb\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.768437 master-0 kubenswrapper[34361]: I0224 05:53:04.768356 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-config\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.770115 master-0 kubenswrapper[34361]: I0224 05:53:04.770067 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:04.773797 master-0 kubenswrapper[34361]: I0224 05:53:04.770633 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-swift-storage-0\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.780973 master-0 kubenswrapper[34361]: I0224 05:53:04.780851 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-logs\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.785672 master-0 kubenswrapper[34361]: I0224 05:53:04.784669 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-merged\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.785672 master-0 kubenswrapper[34361]: I0224 05:53:04.785576 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-scripts\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.792595 master-0 kubenswrapper[34361]: I0224 05:53:04.791771 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/700c3143-d1a3-47a3-92f5-02a0b1e428a4-etc-podinfo\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 
05:53:04.792595 master-0 kubenswrapper[34361]: I0224 05:53:04.792494 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-combined-ca-bundle\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.792595 master-0 kubenswrapper[34361]: I0224 05:53:04.792514 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.803326 master-0 kubenswrapper[34361]: I0224 05:53:04.801585 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2hpp\" (UniqueName: \"kubernetes.io/projected/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-kube-api-access-n2hpp\") pod \"dnsmasq-dns-7d9d8bd467-64rvv\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") " pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:04.823189 master-0 kubenswrapper[34361]: I0224 05:53:04.823074 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-custom\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.825860 master-0 kubenswrapper[34361]: I0224 05:53:04.825585 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p2h7\" (UniqueName: \"kubernetes.io/projected/700c3143-d1a3-47a3-92f5-02a0b1e428a4-kube-api-access-5p2h7\") pod \"ironic-555fd64789-cgpft\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") " pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.886006 master-0 
kubenswrapper[34361]: I0224 05:53:04.865255 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-api-0"] Feb 24 05:53:04.895355 master-0 kubenswrapper[34361]: I0224 05:53:04.895240 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:04.906068 master-0 kubenswrapper[34361]: I0224 05:53:04.901768 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7d9548858-h45cl"] Feb 24 05:53:04.931291 master-0 kubenswrapper[34361]: I0224 05:53:04.927457 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:05.437367 master-0 kubenswrapper[34361]: I0224 05:53:05.437209 34361 generic.go:334] "Generic (PLEG): container finished" podID="3c458e23-405b-449a-8e0b-aa6e42a286c9" containerID="719381efa530f9326b166c8c44acc394693bf446e30448e49cc2d04aeac11fa2" exitCode=0 Feb 24 05:53:05.437367 master-0 kubenswrapper[34361]: I0224 05:53:05.437294 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" event={"ID":"3c458e23-405b-449a-8e0b-aa6e42a286c9","Type":"ContainerDied","Data":"719381efa530f9326b166c8c44acc394693bf446e30448e49cc2d04aeac11fa2"} Feb 24 05:53:05.440640 master-0 kubenswrapper[34361]: I0224 05:53:05.440223 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"abacbccf-fef9-4c23-86af-7d01714da00b","Type":"ContainerStarted","Data":"66d8072def779a712546abb88086080a72ff16ffd7301e4a2751bd67b6768027"} Feb 24 05:53:05.458909 master-0 kubenswrapper[34361]: I0224 05:53:05.457205 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d9548858-h45cl" event={"ID":"e585e3c7-27e7-4583-8053-fd9d301a9881","Type":"ContainerStarted","Data":"3aec742cdbd5226fbff15a85efd7a1d9d56bf9b9d10734c6caf2560144f640ce"} Feb 24 05:53:05.475822 master-0 kubenswrapper[34361]: 
I0224 05:53:05.475765 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:53:05.589346 master-0 kubenswrapper[34361]: I0224 05:53:05.589018 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-sb\") pod \"3c458e23-405b-449a-8e0b-aa6e42a286c9\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " Feb 24 05:53:05.589698 master-0 kubenswrapper[34361]: I0224 05:53:05.589443 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-svc\") pod \"3c458e23-405b-449a-8e0b-aa6e42a286c9\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " Feb 24 05:53:05.589698 master-0 kubenswrapper[34361]: I0224 05:53:05.589485 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-swift-storage-0\") pod \"3c458e23-405b-449a-8e0b-aa6e42a286c9\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " Feb 24 05:53:05.589698 master-0 kubenswrapper[34361]: I0224 05:53:05.589538 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7msnw\" (UniqueName: \"kubernetes.io/projected/3c458e23-405b-449a-8e0b-aa6e42a286c9-kube-api-access-7msnw\") pod \"3c458e23-405b-449a-8e0b-aa6e42a286c9\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " Feb 24 05:53:05.589853 master-0 kubenswrapper[34361]: I0224 05:53:05.589704 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-config\") pod \"3c458e23-405b-449a-8e0b-aa6e42a286c9\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " Feb 24 
05:53:05.589853 master-0 kubenswrapper[34361]: I0224 05:53:05.589848 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-nb\") pod \"3c458e23-405b-449a-8e0b-aa6e42a286c9\" (UID: \"3c458e23-405b-449a-8e0b-aa6e42a286c9\") " Feb 24 05:53:05.605288 master-0 kubenswrapper[34361]: I0224 05:53:05.605149 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c458e23-405b-449a-8e0b-aa6e42a286c9-kube-api-access-7msnw" (OuterVolumeSpecName: "kube-api-access-7msnw") pod "3c458e23-405b-449a-8e0b-aa6e42a286c9" (UID: "3c458e23-405b-449a-8e0b-aa6e42a286c9"). InnerVolumeSpecName "kube-api-access-7msnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:05.616352 master-0 kubenswrapper[34361]: I0224 05:53:05.614074 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7msnw\" (UniqueName: \"kubernetes.io/projected/3c458e23-405b-449a-8e0b-aa6e42a286c9-kube-api-access-7msnw\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:05.630579 master-0 kubenswrapper[34361]: I0224 05:53:05.630421 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-pwcj4"] Feb 24 05:53:05.632257 master-0 kubenswrapper[34361]: W0224 05:53:05.632175 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a0262a8_c31e_4022_bf1e_7952af276733.slice/crio-414fd064914eb015747b522e5afbfbf0b4a0918dba46d57eab2124e4475f12e7 WatchSource:0}: Error finding container 414fd064914eb015747b522e5afbfbf0b4a0918dba46d57eab2124e4475f12e7: Status 404 returned error can't find the container with id 414fd064914eb015747b522e5afbfbf0b4a0918dba46d57eab2124e4475f12e7 Feb 24 05:53:05.692450 master-0 kubenswrapper[34361]: I0224 05:53:05.692363 34361 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c458e23-405b-449a-8e0b-aa6e42a286c9" (UID: "3c458e23-405b-449a-8e0b-aa6e42a286c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:05.718020 master-0 kubenswrapper[34361]: I0224 05:53:05.717485 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:05.755548 master-0 kubenswrapper[34361]: I0224 05:53:05.755437 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-856d98ff5d-2p7np"] Feb 24 05:53:05.769119 master-0 kubenswrapper[34361]: I0224 05:53:05.769015 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c458e23-405b-449a-8e0b-aa6e42a286c9" (UID: "3c458e23-405b-449a-8e0b-aa6e42a286c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:05.771352 master-0 kubenswrapper[34361]: I0224 05:53:05.771174 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c458e23-405b-449a-8e0b-aa6e42a286c9" (UID: "3c458e23-405b-449a-8e0b-aa6e42a286c9"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:05.771566 master-0 kubenswrapper[34361]: I0224 05:53:05.771498 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c458e23-405b-449a-8e0b-aa6e42a286c9" (UID: "3c458e23-405b-449a-8e0b-aa6e42a286c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:05.772979 master-0 kubenswrapper[34361]: I0224 05:53:05.772887 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-6402-account-create-update-kj7ts"] Feb 24 05:53:05.801067 master-0 kubenswrapper[34361]: W0224 05:53:05.800990 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40a5b237_764f_4367_85a5_4153a8f90a3e.slice/crio-cc0abdf0ea403d15270005c7abbf931646c4a827d6baab946f7ffdc00b87b2f9 WatchSource:0}: Error finding container cc0abdf0ea403d15270005c7abbf931646c4a827d6baab946f7ffdc00b87b2f9: Status 404 returned error can't find the container with id cc0abdf0ea403d15270005c7abbf931646c4a827d6baab946f7ffdc00b87b2f9 Feb 24 05:53:05.819804 master-0 kubenswrapper[34361]: I0224 05:53:05.819730 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:05.819804 master-0 kubenswrapper[34361]: I0224 05:53:05.819776 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:05.819804 master-0 kubenswrapper[34361]: I0224 05:53:05.819787 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:05.825093 master-0 kubenswrapper[34361]: I0224 05:53:05.825025 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-config" (OuterVolumeSpecName: "config") pod "3c458e23-405b-449a-8e0b-aa6e42a286c9" (UID: "3c458e23-405b-449a-8e0b-aa6e42a286c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:05.922357 master-0 kubenswrapper[34361]: I0224 05:53:05.922134 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c458e23-405b-449a-8e0b-aa6e42a286c9-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:06.084714 master-0 kubenswrapper[34361]: I0224 05:53:06.084631 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 24 05:53:06.085365 master-0 kubenswrapper[34361]: E0224 05:53:06.085301 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c458e23-405b-449a-8e0b-aa6e42a286c9" containerName="dnsmasq-dns" Feb 24 05:53:06.085365 master-0 kubenswrapper[34361]: I0224 05:53:06.085350 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c458e23-405b-449a-8e0b-aa6e42a286c9" containerName="dnsmasq-dns" Feb 24 05:53:06.085449 master-0 kubenswrapper[34361]: E0224 05:53:06.085394 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c458e23-405b-449a-8e0b-aa6e42a286c9" containerName="init" Feb 24 05:53:06.085449 master-0 kubenswrapper[34361]: I0224 05:53:06.085402 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c458e23-405b-449a-8e0b-aa6e42a286c9" containerName="init" Feb 24 05:53:06.085703 master-0 kubenswrapper[34361]: I0224 05:53:06.085668 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c458e23-405b-449a-8e0b-aa6e42a286c9" 
containerName="dnsmasq-dns" Feb 24 05:53:06.089789 master-0 kubenswrapper[34361]: I0224 05:53:06.089705 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-conductor-0" Feb 24 05:53:06.118693 master-0 kubenswrapper[34361]: I0224 05:53:06.118599 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 24 05:53:06.128367 master-0 kubenswrapper[34361]: I0224 05:53:06.128290 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 24 05:53:06.128704 master-0 kubenswrapper[34361]: I0224 05:53:06.128618 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 24 05:53:06.231538 master-0 kubenswrapper[34361]: I0224 05:53:06.231464 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/74198545-a0ee-4142-93a6-86175a1d3c02-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.231644 master-0 kubenswrapper[34361]: I0224 05:53:06.231555 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpl5c\" (UniqueName: \"kubernetes.io/projected/74198545-a0ee-4142-93a6-86175a1d3c02-kube-api-access-tpl5c\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.231644 master-0 kubenswrapper[34361]: I0224 05:53:06.231612 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-scripts\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.231718 master-0 
kubenswrapper[34361]: I0224 05:53:06.231700 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/74198545-a0ee-4142-93a6-86175a1d3c02-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.231772 master-0 kubenswrapper[34361]: I0224 05:53:06.231752 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-856664b8-8c8a-4ded-8789-2098a6951852\" (UniqueName: \"kubernetes.io/csi/topolvm.io^800e953b-d53e-4206-915b-3ee0f5b4a2c2\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.231814 master-0 kubenswrapper[34361]: I0224 05:53:06.231774 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.231848 master-0 kubenswrapper[34361]: I0224 05:53:06.231815 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.231888 master-0 kubenswrapper[34361]: I0224 05:53:06.231857 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-config-data\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " 
pod="openstack/ironic-conductor-0" Feb 24 05:53:06.243903 master-0 kubenswrapper[34361]: I0224 05:53:06.243070 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d9d8bd467-64rvv"] Feb 24 05:53:06.249368 master-0 kubenswrapper[34361]: W0224 05:53:06.249265 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb5e7cfa_75df_4db4_87aa_34e7c7acf852.slice/crio-6bb379766bb7417e819a428f2f7aae035911cdff8b6f55a0e3566352c7b03eb6 WatchSource:0}: Error finding container 6bb379766bb7417e819a428f2f7aae035911cdff8b6f55a0e3566352c7b03eb6: Status 404 returned error can't find the container with id 6bb379766bb7417e819a428f2f7aae035911cdff8b6f55a0e3566352c7b03eb6 Feb 24 05:53:06.264245 master-0 kubenswrapper[34361]: I0224 05:53:06.264131 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-555fd64789-cgpft"] Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.334074 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.334141 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-config-data\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.334193 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/74198545-a0ee-4142-93a6-86175a1d3c02-etc-podinfo\") pod \"ironic-conductor-0\" 
(UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.334227 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpl5c\" (UniqueName: \"kubernetes.io/projected/74198545-a0ee-4142-93a6-86175a1d3c02-kube-api-access-tpl5c\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.334265 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-scripts\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.334352 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/74198545-a0ee-4142-93a6-86175a1d3c02-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.334398 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.334422 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-856664b8-8c8a-4ded-8789-2098a6951852\" (UniqueName: \"kubernetes.io/csi/topolvm.io^800e953b-d53e-4206-915b-3ee0f5b4a2c2\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " 
pod="openstack/ironic-conductor-0" Feb 24 05:53:06.337375 master-0 kubenswrapper[34361]: I0224 05:53:06.335400 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/74198545-a0ee-4142-93a6-86175a1d3c02-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.339265 master-0 kubenswrapper[34361]: I0224 05:53:06.339207 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 24 05:53:06.339367 master-0 kubenswrapper[34361]: I0224 05:53:06.339264 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-856664b8-8c8a-4ded-8789-2098a6951852\" (UniqueName: \"kubernetes.io/csi/topolvm.io^800e953b-d53e-4206-915b-3ee0f5b4a2c2\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/22080b0e2061ce18475c9de4dd4f27aebf1a1416f6a99437dff0918c16e08488/globalmount\"" pod="openstack/ironic-conductor-0" Feb 24 05:53:06.353532 master-0 kubenswrapper[34361]: I0224 05:53:06.352154 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-scripts\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.353532 master-0 kubenswrapper[34361]: I0224 05:53:06.352165 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-config-data\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.353532 master-0 kubenswrapper[34361]: I0224 05:53:06.353009 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.357060 master-0 kubenswrapper[34361]: I0224 05:53:06.356501 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/74198545-a0ee-4142-93a6-86175a1d3c02-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.357060 master-0 kubenswrapper[34361]: I0224 05:53:06.356733 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpl5c\" (UniqueName: \"kubernetes.io/projected/74198545-a0ee-4142-93a6-86175a1d3c02-kube-api-access-tpl5c\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.357060 master-0 kubenswrapper[34361]: I0224 05:53:06.356993 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74198545-a0ee-4142-93a6-86175a1d3c02-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:06.480025 master-0 kubenswrapper[34361]: I0224 05:53:06.479152 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" event={"ID":"40a5b237-764f-4367-85a5-4153a8f90a3e","Type":"ContainerStarted","Data":"cc0abdf0ea403d15270005c7abbf931646c4a827d6baab946f7ffdc00b87b2f9"} Feb 24 05:53:06.482341 master-0 kubenswrapper[34361]: I0224 05:53:06.482107 34361 generic.go:334] "Generic (PLEG): container finished" podID="0a0262a8-c31e-4022-bf1e-7952af276733" 
containerID="06699e4a7aac5b8021ea710da54b96f039768ebde19dc562ecdc64e5a9245ac0" exitCode=0 Feb 24 05:53:06.482341 master-0 kubenswrapper[34361]: I0224 05:53:06.482179 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-pwcj4" event={"ID":"0a0262a8-c31e-4022-bf1e-7952af276733","Type":"ContainerDied","Data":"06699e4a7aac5b8021ea710da54b96f039768ebde19dc562ecdc64e5a9245ac0"} Feb 24 05:53:06.482341 master-0 kubenswrapper[34361]: I0224 05:53:06.482198 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-pwcj4" event={"ID":"0a0262a8-c31e-4022-bf1e-7952af276733","Type":"ContainerStarted","Data":"414fd064914eb015747b522e5afbfbf0b4a0918dba46d57eab2124e4475f12e7"} Feb 24 05:53:06.485468 master-0 kubenswrapper[34361]: I0224 05:53:06.484799 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" event={"ID":"68c5e68e-c7ed-4fb9-a323-2104110a3742","Type":"ContainerStarted","Data":"56b61e70b135e0157c325da061cf145839f62281a175f804247358f1c3ec123a"} Feb 24 05:53:06.485468 master-0 kubenswrapper[34361]: I0224 05:53:06.484873 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" event={"ID":"68c5e68e-c7ed-4fb9-a323-2104110a3742","Type":"ContainerStarted","Data":"63c0a74998a544d44a66937940746fd2fe043f0a067a7bb1efec736830bae986"} Feb 24 05:53:06.488426 master-0 kubenswrapper[34361]: I0224 05:53:06.488138 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" event={"ID":"eb5e7cfa-75df-4db4-87aa-34e7c7acf852","Type":"ContainerStarted","Data":"6bb379766bb7417e819a428f2f7aae035911cdff8b6f55a0e3566352c7b03eb6"} Feb 24 05:53:06.491880 master-0 kubenswrapper[34361]: I0224 05:53:06.491777 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" 
event={"ID":"3c458e23-405b-449a-8e0b-aa6e42a286c9","Type":"ContainerDied","Data":"d7a8e7e943ec22b815d47f56d603c628058fbafba9f3477f332edcb39f803433"} Feb 24 05:53:06.492045 master-0 kubenswrapper[34361]: I0224 05:53:06.491916 34361 scope.go:117] "RemoveContainer" containerID="719381efa530f9326b166c8c44acc394693bf446e30448e49cc2d04aeac11fa2" Feb 24 05:53:06.492213 master-0 kubenswrapper[34361]: I0224 05:53:06.492177 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66c9d5d889-nmpw7" Feb 24 05:53:06.520737 master-0 kubenswrapper[34361]: I0224 05:53:06.510087 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"abacbccf-fef9-4c23-86af-7d01714da00b","Type":"ContainerStarted","Data":"82892e4c6a718e1716bcfc4f951c33d1aef161b972420caeb74cd65be23f3b93"} Feb 24 05:53:06.550102 master-0 kubenswrapper[34361]: I0224 05:53:06.549770 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerStarted","Data":"901d1fef311d3ee7dc425fda3b0a6ce4475633456db187f2a83f24b8deabf5a0"} Feb 24 05:53:06.550102 master-0 kubenswrapper[34361]: I0224 05:53:06.550038 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" podStartSLOduration=3.550009732 podStartE2EDuration="3.550009732s" podCreationTimestamp="2026-02-24 05:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:06.531777279 +0000 UTC m=+946.234394325" watchObservedRunningTime="2026-02-24 05:53:06.550009732 +0000 UTC m=+946.252626778" Feb 24 05:53:06.632011 master-0 kubenswrapper[34361]: I0224 05:53:06.627232 34361 scope.go:117] "RemoveContainer" containerID="0360e68d2b78f485981b0d4adbdeaef4ed36c422da62966ca8112b12a599df35" Feb 24 
05:53:06.637146 master-0 kubenswrapper[34361]: I0224 05:53:06.634872 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7d9548858-h45cl" podStartSLOduration=3.634832639 podStartE2EDuration="3.634832639s" podCreationTimestamp="2026-02-24 05:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:06.632203838 +0000 UTC m=+946.334820904" watchObservedRunningTime="2026-02-24 05:53:06.634832639 +0000 UTC m=+946.337449685" Feb 24 05:53:06.693904 master-0 kubenswrapper[34361]: I0224 05:53:06.693277 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:06.693904 master-0 kubenswrapper[34361]: I0224 05:53:06.693423 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d9548858-h45cl" event={"ID":"e585e3c7-27e7-4583-8053-fd9d301a9881","Type":"ContainerStarted","Data":"9738b0f6b66b3c3c6057e1029e731903316836e6200752b9b86f5963b09c2224"} Feb 24 05:53:06.693904 master-0 kubenswrapper[34361]: I0224 05:53:06.693447 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7d9548858-h45cl" event={"ID":"e585e3c7-27e7-4583-8053-fd9d301a9881","Type":"ContainerStarted","Data":"8827da9b750eea22efd133b836544e9c3dcfe9c32823706724808e7cebca332d"} Feb 24 05:53:06.693904 master-0 kubenswrapper[34361]: I0224 05:53:06.693459 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66c9d5d889-nmpw7"] Feb 24 05:53:06.693904 master-0 kubenswrapper[34361]: I0224 05:53:06.693508 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66c9d5d889-nmpw7"] Feb 24 05:53:07.097904 master-0 kubenswrapper[34361]: I0224 05:53:07.097788 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:07.121580 master-0 
kubenswrapper[34361]: I0224 05:53:07.120191 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:07.202610 master-0 kubenswrapper[34361]: I0224 05:53:07.202523 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-scheduler-0"] Feb 24 05:53:07.247183 master-0 kubenswrapper[34361]: I0224 05:53:07.244971 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-backup-0"] Feb 24 05:53:07.423412 master-0 kubenswrapper[34361]: I0224 05:53:07.423335 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:07.524370 master-0 kubenswrapper[34361]: I0224 05:53:07.524293 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"] Feb 24 05:53:07.642575 master-0 kubenswrapper[34361]: I0224 05:53:07.642412 34361 generic.go:334] "Generic (PLEG): container finished" podID="68c5e68e-c7ed-4fb9-a323-2104110a3742" containerID="56b61e70b135e0157c325da061cf145839f62281a175f804247358f1c3ec123a" exitCode=0 Feb 24 05:53:07.642575 master-0 kubenswrapper[34361]: I0224 05:53:07.642547 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" event={"ID":"68c5e68e-c7ed-4fb9-a323-2104110a3742","Type":"ContainerDied","Data":"56b61e70b135e0157c325da061cf145839f62281a175f804247358f1c3ec123a"} Feb 24 05:53:07.647300 master-0 kubenswrapper[34361]: I0224 05:53:07.647230 34361 generic.go:334] "Generic (PLEG): container finished" podID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" containerID="71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c" exitCode=0 Feb 24 05:53:07.647300 master-0 kubenswrapper[34361]: I0224 05:53:07.647427 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" 
event={"ID":"eb5e7cfa-75df-4db4-87aa-34e7c7acf852","Type":"ContainerDied","Data":"71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c"} Feb 24 05:53:07.655766 master-0 kubenswrapper[34361]: I0224 05:53:07.655699 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-api-0" event={"ID":"abacbccf-fef9-4c23-86af-7d01714da00b","Type":"ContainerStarted","Data":"5599c05bbf888cee04f9d426c3e4d04044baf3f02583bf68de94822b2ebaa61a"} Feb 24 05:53:07.656825 master-0 kubenswrapper[34361]: I0224 05:53:07.656696 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b7346-backup-0" podUID="c1490d04-fc1d-488b-a427-285554ec1692" containerName="cinder-backup" containerID="cri-o://f7d03b7d30fa8baa7cb21136279a2fed9c3dda6e15069cd7df8b5a2cd646f37e" gracePeriod=30 Feb 24 05:53:07.657205 master-0 kubenswrapper[34361]: I0224 05:53:07.657105 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b7346-backup-0" podUID="c1490d04-fc1d-488b-a427-285554ec1692" containerName="probe" containerID="cri-o://7e1b2ebdea4f1ac8655c81e6c89c7a613460cc40d12b5b35c1d7fd9d0e90e437" gracePeriod=30 Feb 24 05:53:07.665841 master-0 kubenswrapper[34361]: I0224 05:53:07.659156 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b7346-scheduler-0" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" containerName="cinder-scheduler" containerID="cri-o://f3f9c022e793ae50651ba44d048e2d2e445c8cdf9660ca198173ab50b867ecb7" gracePeriod=30 Feb 24 05:53:07.665841 master-0 kubenswrapper[34361]: I0224 05:53:07.659264 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:07.665841 master-0 kubenswrapper[34361]: I0224 05:53:07.659350 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b7346-scheduler-0" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" 
containerName="probe" containerID="cri-o://6644f2541ea8eb1d0e102c41726b036f76b1bcafd2712081e27ca63f8e5ac73f" gracePeriod=30 Feb 24 05:53:07.665841 master-0 kubenswrapper[34361]: I0224 05:53:07.662348 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerName="cinder-volume" containerID="cri-o://1c34a792ad75c0733c51a93cf45561e771b6c1fed09ddf0c6d96fe0d13d23f16" gracePeriod=30 Feb 24 05:53:07.665841 master-0 kubenswrapper[34361]: I0224 05:53:07.662804 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerName="probe" containerID="cri-o://fc623512700ec8e8994d8e079171552546ec3764436c6fded4cca2506b97aa97" gracePeriod=30 Feb 24 05:53:07.803344 master-0 kubenswrapper[34361]: I0224 05:53:07.801408 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-6cc9f57487-vklxq"] Feb 24 05:53:07.812187 master-0 kubenswrapper[34361]: I0224 05:53:07.804997 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.815774 master-0 kubenswrapper[34361]: I0224 05:53:07.815718 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Feb 24 05:53:07.821332 master-0 kubenswrapper[34361]: I0224 05:53:07.817769 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6cc9f57487-vklxq"] Feb 24 05:53:07.821332 master-0 kubenswrapper[34361]: I0224 05:53:07.820045 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Feb 24 05:53:07.836459 master-0 kubenswrapper[34361]: I0224 05:53:07.836052 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-api-0" podStartSLOduration=4.836026026 podStartE2EDuration="4.836026026s" podCreationTimestamp="2026-02-24 05:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:07.814954038 +0000 UTC m=+947.517571084" watchObservedRunningTime="2026-02-24 05:53:07.836026026 +0000 UTC m=+947.538643072" Feb 24 05:53:07.910341 master-0 kubenswrapper[34361]: I0224 05:53:07.909295 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-856664b8-8c8a-4ded-8789-2098a6951852\" (UniqueName: \"kubernetes.io/csi/topolvm.io^800e953b-d53e-4206-915b-3ee0f5b4a2c2\") pod \"ironic-conductor-0\" (UID: \"74198545-a0ee-4142-93a6-86175a1d3c02\") " pod="openstack/ironic-conductor-0" Feb 24 05:53:07.928645 master-0 kubenswrapper[34361]: I0224 05:53:07.928514 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e92bebe0-2823-46e2-bd8f-7755ed558ab8-etc-podinfo\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.928645 
master-0 kubenswrapper[34361]: I0224 05:53:07.928586 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-combined-ca-bundle\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.928901 master-0 kubenswrapper[34361]: I0224 05:53:07.928849 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-public-tls-certs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.928901 master-0 kubenswrapper[34361]: I0224 05:53:07.928884 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-scripts\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.928982 master-0 kubenswrapper[34361]: I0224 05:53:07.928920 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.929023 master-0 kubenswrapper[34361]: I0224 05:53:07.928998 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e92bebe0-2823-46e2-bd8f-7755ed558ab8-logs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" 
Feb 24 05:53:07.929023 master-0 kubenswrapper[34361]: I0224 05:53:07.929021 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-internal-tls-certs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.929090 master-0 kubenswrapper[34361]: I0224 05:53:07.929047 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data-custom\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.929133 master-0 kubenswrapper[34361]: I0224 05:53:07.929094 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data-merged\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.929133 master-0 kubenswrapper[34361]: I0224 05:53:07.929114 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g75qn\" (UniqueName: \"kubernetes.io/projected/e92bebe0-2823-46e2-bd8f-7755ed558ab8-kube-api-access-g75qn\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:07.996226 master-0 kubenswrapper[34361]: I0224 05:53:07.996123 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039094 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e92bebe0-2823-46e2-bd8f-7755ed558ab8-logs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039162 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-internal-tls-certs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039197 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data-custom\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039243 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data-merged\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039264 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g75qn\" (UniqueName: \"kubernetes.io/projected/e92bebe0-2823-46e2-bd8f-7755ed558ab8-kube-api-access-g75qn\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " 
pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039328 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e92bebe0-2823-46e2-bd8f-7755ed558ab8-etc-podinfo\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039350 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-combined-ca-bundle\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039429 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-public-tls-certs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039450 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-scripts\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.039483 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.042154 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e92bebe0-2823-46e2-bd8f-7755ed558ab8-logs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.045657 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data-merged\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.048453 master-0 kubenswrapper[34361]: I0224 05:53:08.046574 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-combined-ca-bundle\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.063791 master-0 kubenswrapper[34361]: I0224 05:53:08.063684 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-scripts\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.064510 master-0 kubenswrapper[34361]: I0224 05:53:08.064053 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-public-tls-certs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.064510 master-0 kubenswrapper[34361]: I0224 05:53:08.064423 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-internal-tls-certs\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.064750 master-0 kubenswrapper[34361]: I0224 05:53:08.064709 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data-custom\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.065111 master-0 kubenswrapper[34361]: I0224 05:53:08.065067 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e92bebe0-2823-46e2-bd8f-7755ed558ab8-config-data\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.065214 master-0 kubenswrapper[34361]: I0224 05:53:08.065181 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e92bebe0-2823-46e2-bd8f-7755ed558ab8-etc-podinfo\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.075019 master-0 kubenswrapper[34361]: I0224 05:53:08.074955 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g75qn\" (UniqueName: \"kubernetes.io/projected/e92bebe0-2823-46e2-bd8f-7755ed558ab8-kube-api-access-g75qn\") pod \"ironic-6cc9f57487-vklxq\" (UID: \"e92bebe0-2823-46e2-bd8f-7755ed558ab8\") " pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.177046 master-0 kubenswrapper[34361]: I0224 05:53:08.176933 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:08.627096 master-0 kubenswrapper[34361]: I0224 05:53:08.627024 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c458e23-405b-449a-8e0b-aa6e42a286c9" path="/var/lib/kubelet/pods/3c458e23-405b-449a-8e0b-aa6e42a286c9/volumes"
Feb 24 05:53:08.691903 master-0 kubenswrapper[34361]: I0224 05:53:08.691670 34361 generic.go:334] "Generic (PLEG): container finished" podID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerID="1c34a792ad75c0733c51a93cf45561e771b6c1fed09ddf0c6d96fe0d13d23f16" exitCode=0
Feb 24 05:53:08.695122 master-0 kubenswrapper[34361]: I0224 05:53:08.693562 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" event={"ID":"fad53e67-bd04-4577-af57-e5b896b6e56f","Type":"ContainerDied","Data":"1c34a792ad75c0733c51a93cf45561e771b6c1fed09ddf0c6d96fe0d13d23f16"}
Feb 24 05:53:08.695122 master-0 kubenswrapper[34361]: I0224 05:53:08.693641 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-b7346-api-0"
Feb 24 05:53:09.658866 master-0 kubenswrapper[34361]: I0224 05:53:09.657977 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-pwcj4"
Feb 24 05:53:09.681478 master-0 kubenswrapper[34361]: I0224 05:53:09.681406 34361 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts"
Feb 24 05:53:09.746407 master-0 kubenswrapper[34361]: I0224 05:53:09.738773 34361 generic.go:334] "Generic (PLEG): container finished" podID="c1490d04-fc1d-488b-a427-285554ec1692" containerID="7e1b2ebdea4f1ac8655c81e6c89c7a613460cc40d12b5b35c1d7fd9d0e90e437" exitCode=0
Feb 24 05:53:09.746407 master-0 kubenswrapper[34361]: I0224 05:53:09.738830 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" event={"ID":"c1490d04-fc1d-488b-a427-285554ec1692","Type":"ContainerDied","Data":"7e1b2ebdea4f1ac8655c81e6c89c7a613460cc40d12b5b35c1d7fd9d0e90e437"}
Feb 24 05:53:09.746407 master-0 kubenswrapper[34361]: I0224 05:53:09.738899 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" event={"ID":"c1490d04-fc1d-488b-a427-285554ec1692","Type":"ContainerDied","Data":"f7d03b7d30fa8baa7cb21136279a2fed9c3dda6e15069cd7df8b5a2cd646f37e"}
Feb 24 05:53:09.746407 master-0 kubenswrapper[34361]: I0224 05:53:09.738844 34361 generic.go:334] "Generic (PLEG): container finished" podID="c1490d04-fc1d-488b-a427-285554ec1692" containerID="f7d03b7d30fa8baa7cb21136279a2fed9c3dda6e15069cd7df8b5a2cd646f37e" exitCode=0
Feb 24 05:53:09.746763 master-0 kubenswrapper[34361]: I0224 05:53:09.746460 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-pwcj4"
Feb 24 05:53:09.746763 master-0 kubenswrapper[34361]: I0224 05:53:09.746522 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-pwcj4" event={"ID":"0a0262a8-c31e-4022-bf1e-7952af276733","Type":"ContainerDied","Data":"414fd064914eb015747b522e5afbfbf0b4a0918dba46d57eab2124e4475f12e7"}
Feb 24 05:53:09.746763 master-0 kubenswrapper[34361]: I0224 05:53:09.746589 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="414fd064914eb015747b522e5afbfbf0b4a0918dba46d57eab2124e4475f12e7"
Feb 24 05:53:09.748509 master-0 kubenswrapper[34361]: I0224 05:53:09.748476 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts" event={"ID":"68c5e68e-c7ed-4fb9-a323-2104110a3742","Type":"ContainerDied","Data":"63c0a74998a544d44a66937940746fd2fe043f0a067a7bb1efec736830bae986"}
Feb 24 05:53:09.748572 master-0 kubenswrapper[34361]: I0224 05:53:09.748506 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63c0a74998a544d44a66937940746fd2fe043f0a067a7bb1efec736830bae986"
Feb 24 05:53:09.748615 master-0 kubenswrapper[34361]: I0224 05:53:09.748574 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-6402-account-create-update-kj7ts"
Feb 24 05:53:09.753038 master-0 kubenswrapper[34361]: I0224 05:53:09.752994 34361 generic.go:334] "Generic (PLEG): container finished" podID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerID="fc623512700ec8e8994d8e079171552546ec3764436c6fded4cca2506b97aa97" exitCode=0
Feb 24 05:53:09.753038 master-0 kubenswrapper[34361]: I0224 05:53:09.753044 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" event={"ID":"fad53e67-bd04-4577-af57-e5b896b6e56f","Type":"ContainerDied","Data":"fc623512700ec8e8994d8e079171552546ec3764436c6fded4cca2506b97aa97"}
Feb 24 05:53:09.755463 master-0 kubenswrapper[34361]: I0224 05:53:09.755422 34361 generic.go:334] "Generic (PLEG): container finished" podID="2f0b28b5-741c-4761-b250-30d89ea99407" containerID="6644f2541ea8eb1d0e102c41726b036f76b1bcafd2712081e27ca63f8e5ac73f" exitCode=0
Feb 24 05:53:09.755463 master-0 kubenswrapper[34361]: I0224 05:53:09.755443 34361 generic.go:334] "Generic (PLEG): container finished" podID="2f0b28b5-741c-4761-b250-30d89ea99407" containerID="f3f9c022e793ae50651ba44d048e2d2e445c8cdf9660ca198173ab50b867ecb7" exitCode=0
Feb 24 05:53:09.756149 master-0 kubenswrapper[34361]: I0224 05:53:09.756118 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"2f0b28b5-741c-4761-b250-30d89ea99407","Type":"ContainerDied","Data":"6644f2541ea8eb1d0e102c41726b036f76b1bcafd2712081e27ca63f8e5ac73f"}
Feb 24 05:53:09.756149 master-0 kubenswrapper[34361]: I0224 05:53:09.756147 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"2f0b28b5-741c-4761-b250-30d89ea99407","Type":"ContainerDied","Data":"f3f9c022e793ae50651ba44d048e2d2e445c8cdf9660ca198173ab50b867ecb7"}
Feb 24 05:53:09.810282 master-0 kubenswrapper[34361]: I0224 05:53:09.810196 34361 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68c5e68e-c7ed-4fb9-a323-2104110a3742-operator-scripts\") pod \"68c5e68e-c7ed-4fb9-a323-2104110a3742\" (UID: \"68c5e68e-c7ed-4fb9-a323-2104110a3742\") "
Feb 24 05:53:09.810496 master-0 kubenswrapper[34361]: I0224 05:53:09.810466 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s96kw\" (UniqueName: \"kubernetes.io/projected/68c5e68e-c7ed-4fb9-a323-2104110a3742-kube-api-access-s96kw\") pod \"68c5e68e-c7ed-4fb9-a323-2104110a3742\" (UID: \"68c5e68e-c7ed-4fb9-a323-2104110a3742\") "
Feb 24 05:53:09.810610 master-0 kubenswrapper[34361]: I0224 05:53:09.810569 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsd8t\" (UniqueName: \"kubernetes.io/projected/0a0262a8-c31e-4022-bf1e-7952af276733-kube-api-access-gsd8t\") pod \"0a0262a8-c31e-4022-bf1e-7952af276733\" (UID: \"0a0262a8-c31e-4022-bf1e-7952af276733\") "
Feb 24 05:53:09.810738 master-0 kubenswrapper[34361]: I0224 05:53:09.810665 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0262a8-c31e-4022-bf1e-7952af276733-operator-scripts\") pod \"0a0262a8-c31e-4022-bf1e-7952af276733\" (UID: \"0a0262a8-c31e-4022-bf1e-7952af276733\") "
Feb 24 05:53:09.810786 master-0 kubenswrapper[34361]: I0224 05:53:09.810747 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68c5e68e-c7ed-4fb9-a323-2104110a3742-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68c5e68e-c7ed-4fb9-a323-2104110a3742" (UID: "68c5e68e-c7ed-4fb9-a323-2104110a3742"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:53:09.816088 master-0 kubenswrapper[34361]: I0224 05:53:09.812465 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a0262a8-c31e-4022-bf1e-7952af276733-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a0262a8-c31e-4022-bf1e-7952af276733" (UID: "0a0262a8-c31e-4022-bf1e-7952af276733"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:53:09.816088 master-0 kubenswrapper[34361]: I0224 05:53:09.814593 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c5e68e-c7ed-4fb9-a323-2104110a3742-kube-api-access-s96kw" (OuterVolumeSpecName: "kube-api-access-s96kw") pod "68c5e68e-c7ed-4fb9-a323-2104110a3742" (UID: "68c5e68e-c7ed-4fb9-a323-2104110a3742"). InnerVolumeSpecName "kube-api-access-s96kw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:09.821267 master-0 kubenswrapper[34361]: I0224 05:53:09.816592 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0262a8-c31e-4022-bf1e-7952af276733-kube-api-access-gsd8t" (OuterVolumeSpecName: "kube-api-access-gsd8t") pod "0a0262a8-c31e-4022-bf1e-7952af276733" (UID: "0a0262a8-c31e-4022-bf1e-7952af276733"). InnerVolumeSpecName "kube-api-access-gsd8t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:09.834345 master-0 kubenswrapper[34361]: I0224 05:53:09.833935 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68c5e68e-c7ed-4fb9-a323-2104110a3742-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:09.834345 master-0 kubenswrapper[34361]: I0224 05:53:09.834001 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s96kw\" (UniqueName: \"kubernetes.io/projected/68c5e68e-c7ed-4fb9-a323-2104110a3742-kube-api-access-s96kw\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:09.834345 master-0 kubenswrapper[34361]: I0224 05:53:09.834024 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsd8t\" (UniqueName: \"kubernetes.io/projected/0a0262a8-c31e-4022-bf1e-7952af276733-kube-api-access-gsd8t\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:09.834345 master-0 kubenswrapper[34361]: I0224 05:53:09.834037 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a0262a8-c31e-4022-bf1e-7952af276733-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.066586 master-0 kubenswrapper[34361]: I0224 05:53:10.066532 34361 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:53:10.169369 master-0 kubenswrapper[34361]: I0224 05:53:10.167276 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-dev\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.169526 master-0 kubenswrapper[34361]: I0224 05:53:10.169462 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-run\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.169526 master-0 kubenswrapper[34361]: I0224 05:53:10.169517 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-lib-cinder\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.169665 master-0 kubenswrapper[34361]: I0224 05:53:10.169582 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-lib-modules\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.169982 master-0 kubenswrapper[34361]: I0224 05:53:10.167600 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-dev" (OuterVolumeSpecName: "dev") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.170063 master-0 kubenswrapper[34361]: I0224 05:53:10.169937 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.170155 master-0 kubenswrapper[34361]: I0224 05:53:10.169940 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-run" (OuterVolumeSpecName: "run") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.170249 master-0 kubenswrapper[34361]: I0224 05:53:10.170161 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.173909 master-0 kubenswrapper[34361]: I0224 05:53:10.173860 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-combined-ca-bundle\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.173909 master-0 kubenswrapper[34361]: I0224 05:53:10.173916 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174141 master-0 kubenswrapper[34361]: I0224 05:53:10.173957 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-cinder\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174141 master-0 kubenswrapper[34361]: I0224 05:53:10.173988 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-sys\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174141 master-0 kubenswrapper[34361]: I0224 05:53:10.174082 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-machine-id\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174302 master-0 kubenswrapper[34361]: I0224 05:53:10.174159 34361 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data-custom\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174302 master-0 kubenswrapper[34361]: I0224 05:53:10.174224 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzhlb\" (UniqueName: \"kubernetes.io/projected/fad53e67-bd04-4577-af57-e5b896b6e56f-kube-api-access-zzhlb\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174302 master-0 kubenswrapper[34361]: I0224 05:53:10.174281 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-brick\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174461 master-0 kubenswrapper[34361]: I0224 05:53:10.174374 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-iscsi\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174559 master-0 kubenswrapper[34361]: I0224 05:53:10.174532 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-nvme\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.174632 master-0 kubenswrapper[34361]: I0224 05:53:10.174586 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-scripts\") pod \"fad53e67-bd04-4577-af57-e5b896b6e56f\" (UID: \"fad53e67-bd04-4577-af57-e5b896b6e56f\") "
Feb 24 05:53:10.178625 master-0 kubenswrapper[34361]: I0224 05:53:10.178106 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.178930 master-0 kubenswrapper[34361]: I0224 05:53:10.178850 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.179033 master-0 kubenswrapper[34361]: I0224 05:53:10.178969 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-sys" (OuterVolumeSpecName: "sys") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.179033 master-0 kubenswrapper[34361]: I0224 05:53:10.179002 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.179033 master-0 kubenswrapper[34361]: I0224 05:53:10.179027 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.179302 master-0 kubenswrapper[34361]: I0224 05:53:10.179057 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.182407 master-0 kubenswrapper[34361]: I0224 05:53:10.182349 34361 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-brick\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182407 master-0 kubenswrapper[34361]: I0224 05:53:10.182404 34361 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-iscsi\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182407 master-0 kubenswrapper[34361]: I0224 05:53:10.182418 34361 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-nvme\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182407 master-0 kubenswrapper[34361]: I0224 05:53:10.182431 34361 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName:
\"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-dev\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182407 master-0 kubenswrapper[34361]: I0224 05:53:10.182446 34361 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-run\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182988 master-0 kubenswrapper[34361]: I0224 05:53:10.182459 34361 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-lib-cinder\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182988 master-0 kubenswrapper[34361]: I0224 05:53:10.182471 34361 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-lib-modules\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182988 master-0 kubenswrapper[34361]: I0224 05:53:10.182481 34361 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-var-locks-cinder\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182988 master-0 kubenswrapper[34361]: I0224 05:53:10.182492 34361 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-sys\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.182988 master-0 kubenswrapper[34361]: I0224 05:53:10.182503 34361 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fad53e67-bd04-4577-af57-e5b896b6e56f-etc-machine-id\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.185262 master-0 kubenswrapper[34361]: I0224 05:53:10.185204 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:10.185530 master-0 kubenswrapper[34361]: I0224 05:53:10.185505 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad53e67-bd04-4577-af57-e5b896b6e56f-kube-api-access-zzhlb" (OuterVolumeSpecName: "kube-api-access-zzhlb") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "kube-api-access-zzhlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:10.189455 master-0 kubenswrapper[34361]: I0224 05:53:10.189356 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-scripts" (OuterVolumeSpecName: "scripts") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:10.327018 master-0 kubenswrapper[34361]: I0224 05:53:10.325626 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.327018 master-0 kubenswrapper[34361]: I0224 05:53:10.325676 34361 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.327018 master-0 kubenswrapper[34361]: I0224 05:53:10.325689 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzhlb\" (UniqueName: \"kubernetes.io/projected/fad53e67-bd04-4577-af57-e5b896b6e56f-kube-api-access-zzhlb\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.495676 master-0 kubenswrapper[34361]: I0224 05:53:10.495616 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:10.519565 master-0 kubenswrapper[34361]: I0224 05:53:10.512338 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data" (OuterVolumeSpecName: "config-data") pod "fad53e67-bd04-4577-af57-e5b896b6e56f" (UID: "fad53e67-bd04-4577-af57-e5b896b6e56f"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:10.530449 master-0 kubenswrapper[34361]: I0224 05:53:10.530380 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.530449 master-0 kubenswrapper[34361]: I0224 05:53:10.530441 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fad53e67-bd04-4577-af57-e5b896b6e56f-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:10.692686 master-0 kubenswrapper[34361]: I0224 05:53:10.692605 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:53:10.701955 master-0 kubenswrapper[34361]: I0224 05:53:10.701899 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-backup-0"
Feb 24 05:53:10.760928 master-0 kubenswrapper[34361]: I0224 05:53:10.760833 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data-custom\") pod \"2f0b28b5-741c-4761-b250-30d89ea99407\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") "
Feb 24 05:53:10.761280 master-0 kubenswrapper[34361]: I0224 05:53:10.760953 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-combined-ca-bundle\") pod \"2f0b28b5-741c-4761-b250-30d89ea99407\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") "
Feb 24 05:53:10.761280 master-0 kubenswrapper[34361]: I0224 05:53:10.761017 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dgmw\" (UniqueName: \"kubernetes.io/projected/2f0b28b5-741c-4761-b250-30d89ea99407-kube-api-access-9dgmw\") pod \"2f0b28b5-741c-4761-b250-30d89ea99407\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") "
Feb 24 05:53:10.761280 master-0 kubenswrapper[34361]: I0224 05:53:10.761163 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data\") pod \"2f0b28b5-741c-4761-b250-30d89ea99407\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") "
Feb 24 05:53:10.761425 master-0 kubenswrapper[34361]: I0224 05:53:10.761328 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-scripts\") pod \"2f0b28b5-741c-4761-b250-30d89ea99407\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") "
Feb 24 05:53:10.761425 master-0 kubenswrapper[34361]: I0224 05:53:10.761419 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f0b28b5-741c-4761-b250-30d89ea99407-etc-machine-id\") pod \"2f0b28b5-741c-4761-b250-30d89ea99407\" (UID: \"2f0b28b5-741c-4761-b250-30d89ea99407\") "
Feb 24 05:53:10.767367 master-0 kubenswrapper[34361]: I0224 05:53:10.766947 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f0b28b5-741c-4761-b250-30d89ea99407-kube-api-access-9dgmw" (OuterVolumeSpecName: "kube-api-access-9dgmw") pod "2f0b28b5-741c-4761-b250-30d89ea99407" (UID: "2f0b28b5-741c-4761-b250-30d89ea99407"). InnerVolumeSpecName "kube-api-access-9dgmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:10.778379 master-0 kubenswrapper[34361]: I0224 05:53:10.773931 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2f0b28b5-741c-4761-b250-30d89ea99407" (UID: "2f0b28b5-741c-4761-b250-30d89ea99407"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:10.778379 master-0 kubenswrapper[34361]: I0224 05:53:10.774779 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f0b28b5-741c-4761-b250-30d89ea99407-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2f0b28b5-741c-4761-b250-30d89ea99407" (UID: "2f0b28b5-741c-4761-b250-30d89ea99407"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 05:53:10.782071 master-0 kubenswrapper[34361]: I0224 05:53:10.781999 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-scripts" (OuterVolumeSpecName: "scripts") pod "2f0b28b5-741c-4761-b250-30d89ea99407" (UID: "2f0b28b5-741c-4761-b250-30d89ea99407"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:10.790329 master-0 kubenswrapper[34361]: I0224 05:53:10.787804 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" event={"ID":"c1490d04-fc1d-488b-a427-285554ec1692","Type":"ContainerDied","Data":"712f67cdabb3bae9608d2725a92cb1b04710e175f31705fa460471515a7feebd"}
Feb 24 05:53:10.790329 master-0 kubenswrapper[34361]: I0224 05:53:10.787962 34361 scope.go:117] "RemoveContainer" containerID="7e1b2ebdea4f1ac8655c81e6c89c7a613460cc40d12b5b35c1d7fd9d0e90e437"
Feb 24 05:53:10.790329 master-0 kubenswrapper[34361]: I0224 05:53:10.788385 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-backup-0"
Feb 24 05:53:10.794752 master-0 kubenswrapper[34361]: I0224 05:53:10.792947 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" event={"ID":"40a5b237-764f-4367-85a5-4153a8f90a3e","Type":"ContainerStarted","Data":"bd91a8454d87028b9e8706db9f6b6940724d7cb0d147ecc297afc26e76ed0e85"}
Feb 24 05:53:10.794752 master-0 kubenswrapper[34361]: I0224 05:53:10.793028 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np"
Feb 24 05:53:10.801840 master-0 kubenswrapper[34361]: I0224 05:53:10.799508 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" event={"ID":"fad53e67-bd04-4577-af57-e5b896b6e56f","Type":"ContainerDied","Data":"8ea5b994ca9263487cbb12d51da025f9077de5fd96dee4a3591fdd5b7fcf61e7"}
Feb 24 05:53:10.801840 master-0 kubenswrapper[34361]: I0224 05:53:10.799656 34361 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:10.815173 master-0 kubenswrapper[34361]: I0224 05:53:10.813758 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"2f0b28b5-741c-4761-b250-30d89ea99407","Type":"ContainerDied","Data":"ad356392f9d21e069c43944c4d6d2a68d8b26d9fa531697713595e104dd045ee"} Feb 24 05:53:10.815173 master-0 kubenswrapper[34361]: I0224 05:53:10.813893 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:10.818926 master-0 kubenswrapper[34361]: I0224 05:53:10.818895 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" event={"ID":"eb5e7cfa-75df-4db4-87aa-34e7c7acf852","Type":"ContainerStarted","Data":"d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b"} Feb 24 05:53:10.819060 master-0 kubenswrapper[34361]: I0224 05:53:10.819034 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:10.822289 master-0 kubenswrapper[34361]: I0224 05:53:10.821429 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerStarted","Data":"abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663"} Feb 24 05:53:10.847829 master-0 kubenswrapper[34361]: I0224 05:53:10.847750 34361 scope.go:117] "RemoveContainer" containerID="f7d03b7d30fa8baa7cb21136279a2fed9c3dda6e15069cd7df8b5a2cd646f37e" Feb 24 05:53:10.864366 master-0 kubenswrapper[34361]: I0224 05:53:10.864263 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-dev\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 
05:53:10.864366 master-0 kubenswrapper[34361]: I0224 05:53:10.864364 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-lib-modules\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864684 master-0 kubenswrapper[34361]: I0224 05:53:10.864451 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-combined-ca-bundle\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864684 master-0 kubenswrapper[34361]: I0224 05:53:10.864471 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-cinder\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864684 master-0 kubenswrapper[34361]: I0224 05:53:10.864520 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-sys\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864684 master-0 kubenswrapper[34361]: I0224 05:53:10.864540 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-brick\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864684 master-0 kubenswrapper[34361]: I0224 05:53:10.864559 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-run\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864684 master-0 kubenswrapper[34361]: I0224 05:53:10.864620 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data-custom\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864684 master-0 kubenswrapper[34361]: I0224 05:53:10.864648 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-nvme\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864684 master-0 kubenswrapper[34361]: I0224 05:53:10.864683 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lndt4\" (UniqueName: \"kubernetes.io/projected/c1490d04-fc1d-488b-a427-285554ec1692-kube-api-access-lndt4\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864989 master-0 kubenswrapper[34361]: I0224 05:53:10.864745 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864989 master-0 kubenswrapper[34361]: I0224 05:53:10.864797 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-lib-cinder\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: 
\"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.864989 master-0 kubenswrapper[34361]: I0224 05:53:10.864927 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-iscsi\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.865088 master-0 kubenswrapper[34361]: I0224 05:53:10.865038 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-machine-id\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.865140 master-0 kubenswrapper[34361]: I0224 05:53:10.865117 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-scripts\") pod \"c1490d04-fc1d-488b-a427-285554ec1692\" (UID: \"c1490d04-fc1d-488b-a427-285554ec1692\") " Feb 24 05:53:10.865728 master-0 kubenswrapper[34361]: I0224 05:53:10.865692 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dgmw\" (UniqueName: \"kubernetes.io/projected/2f0b28b5-741c-4761-b250-30d89ea99407-kube-api-access-9dgmw\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.865728 master-0 kubenswrapper[34361]: I0224 05:53:10.865713 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.865728 master-0 kubenswrapper[34361]: I0224 05:53:10.865723 34361 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2f0b28b5-741c-4761-b250-30d89ea99407-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 24 
05:53:10.865728 master-0 kubenswrapper[34361]: I0224 05:53:10.865733 34361 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.866390 master-0 kubenswrapper[34361]: I0224 05:53:10.866200 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-dev" (OuterVolumeSpecName: "dev") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.866390 master-0 kubenswrapper[34361]: I0224 05:53:10.866226 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.871549 master-0 kubenswrapper[34361]: I0224 05:53:10.871089 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.872144 master-0 kubenswrapper[34361]: I0224 05:53:10.872124 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-run" (OuterVolumeSpecName: "run") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.874250 master-0 kubenswrapper[34361]: I0224 05:53:10.873045 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1490d04-fc1d-488b-a427-285554ec1692-kube-api-access-lndt4" (OuterVolumeSpecName: "kube-api-access-lndt4") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "kube-api-access-lndt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:10.874825 master-0 kubenswrapper[34361]: I0224 05:53:10.873097 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.874978 master-0 kubenswrapper[34361]: I0224 05:53:10.873129 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-sys" (OuterVolumeSpecName: "sys") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.874978 master-0 kubenswrapper[34361]: I0224 05:53:10.873154 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.874978 master-0 kubenswrapper[34361]: I0224 05:53:10.873485 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.874978 master-0 kubenswrapper[34361]: I0224 05:53:10.873538 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.875403 master-0 kubenswrapper[34361]: I0224 05:53:10.873569 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 05:53:10.887631 master-0 kubenswrapper[34361]: I0224 05:53:10.887423 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:10.889404 master-0 kubenswrapper[34361]: I0224 05:53:10.889356 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-scripts" (OuterVolumeSpecName: "scripts") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:10.891098 master-0 kubenswrapper[34361]: I0224 05:53:10.891011 34361 scope.go:117] "RemoveContainer" containerID="fc623512700ec8e8994d8e079171552546ec3764436c6fded4cca2506b97aa97" Feb 24 05:53:10.914646 master-0 kubenswrapper[34361]: I0224 05:53:10.914534 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" podStartSLOduration=2.96592209 podStartE2EDuration="6.91429485s" podCreationTimestamp="2026-02-24 05:53:04 +0000 UTC" firstStartedPulling="2026-02-24 05:53:05.831647807 +0000 UTC m=+945.534264863" lastFinishedPulling="2026-02-24 05:53:09.780020577 +0000 UTC m=+949.482637623" observedRunningTime="2026-02-24 05:53:10.832909165 +0000 UTC m=+950.535526211" watchObservedRunningTime="2026-02-24 05:53:10.91429485 +0000 UTC m=+950.616911886" Feb 24 05:53:10.929180 master-0 kubenswrapper[34361]: I0224 05:53:10.929092 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-6cc9f57487-vklxq"] Feb 24 05:53:10.934147 master-0 kubenswrapper[34361]: I0224 05:53:10.933586 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" podStartSLOduration=6.93356468 podStartE2EDuration="6.93356468s" podCreationTimestamp="2026-02-24 05:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:10.877892068 +0000 UTC m=+950.580509124" 
watchObservedRunningTime="2026-02-24 05:53:10.93356468 +0000 UTC m=+950.636181726" Feb 24 05:53:10.934794 master-0 kubenswrapper[34361]: I0224 05:53:10.934572 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f0b28b5-741c-4761-b250-30d89ea99407" (UID: "2f0b28b5-741c-4761-b250-30d89ea99407"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:10.955380 master-0 kubenswrapper[34361]: I0224 05:53:10.954214 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data" (OuterVolumeSpecName: "config-data") pod "2f0b28b5-741c-4761-b250-30d89ea99407" (UID: "2f0b28b5-741c-4761-b250-30d89ea99407"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:10.965525 master-0 kubenswrapper[34361]: W0224 05:53:10.965405 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode92bebe0_2823_46e2_bd8f_7755ed558ab8.slice/crio-d22cb69ff90b2128fa48a74e2e65dffaf4edb31fab048969c511a289526ec3ce WatchSource:0}: Error finding container d22cb69ff90b2128fa48a74e2e65dffaf4edb31fab048969c511a289526ec3ce: Status 404 returned error can't find the container with id d22cb69ff90b2128fa48a74e2e65dffaf4edb31fab048969c511a289526ec3ce Feb 24 05:53:10.970062 master-0 kubenswrapper[34361]: I0224 05:53:10.970012 34361 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-sys\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970062 master-0 kubenswrapper[34361]: I0224 05:53:10.970054 34361 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970069 34361 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-run\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970081 34361 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970091 34361 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970103 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lndt4\" (UniqueName: \"kubernetes.io/projected/c1490d04-fc1d-488b-a427-285554ec1692-kube-api-access-lndt4\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970114 34361 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970124 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970134 34361 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970159 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0b28b5-741c-4761-b250-30d89ea99407-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970170 34361 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970179 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970192 34361 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-dev\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970197 master-0 kubenswrapper[34361]: I0224 05:53:10.970201 34361 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.970623 master-0 kubenswrapper[34361]: I0224 05:53:10.970241 34361 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c1490d04-fc1d-488b-a427-285554ec1692-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:10.977387 master-0 kubenswrapper[34361]: I0224 05:53:10.977281 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:10.989182 master-0 kubenswrapper[34361]: I0224 05:53:10.989140 34361 scope.go:117] "RemoveContainer" containerID="1c34a792ad75c0733c51a93cf45561e771b6c1fed09ddf0c6d96fe0d13d23f16" Feb 24 05:53:10.999359 master-0 kubenswrapper[34361]: I0224 05:53:10.998930 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"] Feb 24 05:53:11.011201 master-0 kubenswrapper[34361]: I0224 05:53:11.011134 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"] Feb 24 05:53:11.023851 master-0 kubenswrapper[34361]: I0224 05:53:11.023744 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"] Feb 24 05:53:11.024927 master-0 kubenswrapper[34361]: E0224 05:53:11.024900 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0262a8-c31e-4022-bf1e-7952af276733" containerName="mariadb-database-create" Feb 24 05:53:11.024927 master-0 kubenswrapper[34361]: I0224 05:53:11.024924 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0262a8-c31e-4022-bf1e-7952af276733" containerName="mariadb-database-create" Feb 24 05:53:11.025010 master-0 kubenswrapper[34361]: E0224 05:53:11.024944 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerName="cinder-volume" Feb 24 05:53:11.025010 master-0 kubenswrapper[34361]: I0224 05:53:11.024952 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerName="cinder-volume" Feb 24 05:53:11.025108 master-0 kubenswrapper[34361]: E0224 05:53:11.025086 34361 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerName="probe" Feb 24 05:53:11.025108 master-0 kubenswrapper[34361]: I0224 05:53:11.025103 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerName="probe" Feb 24 05:53:11.025178 master-0 kubenswrapper[34361]: E0224 05:53:11.025119 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" containerName="probe" Feb 24 05:53:11.025178 master-0 kubenswrapper[34361]: I0224 05:53:11.025126 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" containerName="probe" Feb 24 05:53:11.025178 master-0 kubenswrapper[34361]: E0224 05:53:11.025137 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c5e68e-c7ed-4fb9-a323-2104110a3742" containerName="mariadb-account-create-update" Feb 24 05:53:11.025178 master-0 kubenswrapper[34361]: I0224 05:53:11.025143 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c5e68e-c7ed-4fb9-a323-2104110a3742" containerName="mariadb-account-create-update" Feb 24 05:53:11.025178 master-0 kubenswrapper[34361]: E0224 05:53:11.025157 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" containerName="cinder-scheduler" Feb 24 05:53:11.025178 master-0 kubenswrapper[34361]: I0224 05:53:11.025164 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" containerName="cinder-scheduler" Feb 24 05:53:11.025359 master-0 kubenswrapper[34361]: E0224 05:53:11.025188 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1490d04-fc1d-488b-a427-285554ec1692" containerName="cinder-backup" Feb 24 05:53:11.025359 master-0 kubenswrapper[34361]: I0224 05:53:11.025196 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1490d04-fc1d-488b-a427-285554ec1692" 
containerName="cinder-backup" Feb 24 05:53:11.025359 master-0 kubenswrapper[34361]: E0224 05:53:11.025208 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1490d04-fc1d-488b-a427-285554ec1692" containerName="probe" Feb 24 05:53:11.025359 master-0 kubenswrapper[34361]: I0224 05:53:11.025214 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1490d04-fc1d-488b-a427-285554ec1692" containerName="probe" Feb 24 05:53:11.025524 master-0 kubenswrapper[34361]: I0224 05:53:11.025501 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" containerName="probe" Feb 24 05:53:11.025560 master-0 kubenswrapper[34361]: I0224 05:53:11.025548 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0262a8-c31e-4022-bf1e-7952af276733" containerName="mariadb-database-create" Feb 24 05:53:11.025593 master-0 kubenswrapper[34361]: I0224 05:53:11.025567 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1490d04-fc1d-488b-a427-285554ec1692" containerName="cinder-backup" Feb 24 05:53:11.025593 master-0 kubenswrapper[34361]: I0224 05:53:11.025584 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1490d04-fc1d-488b-a427-285554ec1692" containerName="probe" Feb 24 05:53:11.025670 master-0 kubenswrapper[34361]: I0224 05:53:11.025617 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerName="cinder-volume" Feb 24 05:53:11.025670 master-0 kubenswrapper[34361]: I0224 05:53:11.025628 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c5e68e-c7ed-4fb9-a323-2104110a3742" containerName="mariadb-account-create-update" Feb 24 05:53:11.025670 master-0 kubenswrapper[34361]: I0224 05:53:11.025637 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" containerName="probe" Feb 24 05:53:11.025670 master-0 kubenswrapper[34361]: 
I0224 05:53:11.025658 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" containerName="cinder-scheduler" Feb 24 05:53:11.027672 master-0 kubenswrapper[34361]: I0224 05:53:11.027643 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.029523 master-0 kubenswrapper[34361]: I0224 05:53:11.029488 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-volume-lvm-iscsi-config-data" Feb 24 05:53:11.042799 master-0 kubenswrapper[34361]: W0224 05:53:11.042709 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74198545_a0ee_4142_93a6_86175a1d3c02.slice/crio-85043cc2d3bf49279706da8319df6ab4c90efe3d64f23ecf94e8402792cf86b2 WatchSource:0}: Error finding container 85043cc2d3bf49279706da8319df6ab4c90efe3d64f23ecf94e8402792cf86b2: Status 404 returned error can't find the container with id 85043cc2d3bf49279706da8319df6ab4c90efe3d64f23ecf94e8402792cf86b2 Feb 24 05:53:11.054495 master-0 kubenswrapper[34361]: I0224 05:53:11.053151 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"] Feb 24 05:53:11.067436 master-0 kubenswrapper[34361]: I0224 05:53:11.067371 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 24 05:53:11.075821 master-0 kubenswrapper[34361]: I0224 05:53:11.075746 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:11.078269 master-0 kubenswrapper[34361]: I0224 05:53:11.078236 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data" 
(OuterVolumeSpecName: "config-data") pod "c1490d04-fc1d-488b-a427-285554ec1692" (UID: "c1490d04-fc1d-488b-a427-285554ec1692"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:11.110139 master-0 kubenswrapper[34361]: I0224 05:53:11.108645 34361 scope.go:117] "RemoveContainer" containerID="6644f2541ea8eb1d0e102c41726b036f76b1bcafd2712081e27ca63f8e5ac73f" Feb 24 05:53:11.135060 master-0 kubenswrapper[34361]: I0224 05:53:11.135002 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-backup-0"] Feb 24 05:53:11.141067 master-0 kubenswrapper[34361]: I0224 05:53:11.140709 34361 scope.go:117] "RemoveContainer" containerID="f3f9c022e793ae50651ba44d048e2d2e445c8cdf9660ca198173ab50b867ecb7" Feb 24 05:53:11.154300 master-0 kubenswrapper[34361]: I0224 05:53:11.153761 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b7346-backup-0"] Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178093 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-config-data\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178191 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-combined-ca-bundle\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178241 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-sys\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178327 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7xt\" (UniqueName: \"kubernetes.io/projected/7b306f6d-75b5-44a6-921c-edbe44ef1c10-kube-api-access-lp7xt\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178372 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-locks-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178397 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-lib-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178419 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-scripts\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178442 
34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-iscsi\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178465 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-lib-modules\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178503 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-machine-id\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178523 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-config-data-custom\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178544 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-locks-brick\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: 
\"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178565 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-dev\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178588 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-nvme\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178626 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-run\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.178720 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1490d04-fc1d-488b-a427-285554ec1692-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:11.187514 master-0 kubenswrapper[34361]: I0224 05:53:11.185350 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-scheduler-0"] Feb 24 05:53:11.221668 master-0 kubenswrapper[34361]: I0224 05:53:11.203814 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-backup-0"] Feb 24 05:53:11.221668 master-0 kubenswrapper[34361]: I0224 
05:53:11.207460 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.221668 master-0 kubenswrapper[34361]: I0224 05:53:11.209687 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-backup-config-data" Feb 24 05:53:11.224789 master-0 kubenswrapper[34361]: I0224 05:53:11.224702 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b7346-scheduler-0"] Feb 24 05:53:11.254157 master-0 kubenswrapper[34361]: I0224 05:53:11.243865 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-backup-0"] Feb 24 05:53:11.255301 master-0 kubenswrapper[34361]: I0224 05:53:11.255250 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b7346-scheduler-0"] Feb 24 05:53:11.257501 master-0 kubenswrapper[34361]: I0224 05:53:11.257466 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.260275 master-0 kubenswrapper[34361]: I0224 05:53:11.260201 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-b7346-scheduler-config-data" Feb 24 05:53:11.266258 master-0 kubenswrapper[34361]: I0224 05:53:11.266193 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-scheduler-0"] Feb 24 05:53:11.284882 master-0 kubenswrapper[34361]: I0224 05:53:11.284818 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-locks-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.284882 master-0 kubenswrapper[34361]: I0224 05:53:11.284884 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" 
(UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-lib-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.284918 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-scripts\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.284939 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-iscsi\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.284967 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-dev\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.284989 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-lib-modules\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.285014 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-machine-id\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.285043 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-machine-id\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.285061 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-config-data-custom\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.285110 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-locks-brick\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.285137 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-dev\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.285155 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-run\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.285182 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-nvme\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285207 master-0 kubenswrapper[34361]: I0224 05:53:11.285207 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68nbn\" (UniqueName: \"kubernetes.io/projected/8874ade6-37d3-4a82-b833-d67ae2d4b704-kube-api-access-68nbn\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.285575 master-0 kubenswrapper[34361]: I0224 05:53:11.285231 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-locks-brick\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.285575 master-0 kubenswrapper[34361]: I0224 05:53:11.285243 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-lib-modules\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285575 master-0 kubenswrapper[34361]: I0224 05:53:11.285254 34361 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-combined-ca-bundle\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.285575 master-0 kubenswrapper[34361]: I0224 05:53:11.285364 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-locks-brick\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285575 master-0 kubenswrapper[34361]: I0224 05:53:11.285404 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-locks-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.285575 master-0 kubenswrapper[34361]: I0224 05:53:11.285447 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-var-lib-cinder\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.287594 master-0 kubenswrapper[34361]: I0224 05:53:11.287564 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-machine-id\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.287665 master-0 kubenswrapper[34361]: 
I0224 05:53:11.287596 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-iscsi\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.287665 master-0 kubenswrapper[34361]: I0224 05:53:11.287637 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-lib-modules\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.287728 master-0 kubenswrapper[34361]: I0224 05:53:11.287684 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-lib-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.287728 master-0 kubenswrapper[34361]: I0224 05:53:11.287725 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-run\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.287788 master-0 kubenswrapper[34361]: I0224 05:53:11.287751 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-run\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.287788 master-0 kubenswrapper[34361]: I0224 05:53:11.287781 34361 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-config-data-custom\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.287851 master-0 kubenswrapper[34361]: I0224 05:53:11.287821 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-locks-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.287886 master-0 kubenswrapper[34361]: I0224 05:53:11.287850 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-iscsi\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.287886 master-0 kubenswrapper[34361]: I0224 05:53:11.287871 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-config-data\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.287948 master-0 kubenswrapper[34361]: I0224 05:53:11.287899 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-sys\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.287983 master-0 kubenswrapper[34361]: I0224 
05:53:11.287964 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-combined-ca-bundle\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.288018 master-0 kubenswrapper[34361]: I0224 05:53:11.288004 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-sys\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.288063 master-0 kubenswrapper[34361]: I0224 05:53:11.288023 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-config-data\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.288119 master-0 kubenswrapper[34361]: I0224 05:53:11.288077 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-nvme\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.288119 master-0 kubenswrapper[34361]: I0224 05:53:11.288101 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp7xt\" (UniqueName: \"kubernetes.io/projected/7b306f6d-75b5-44a6-921c-edbe44ef1c10-kube-api-access-lp7xt\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.288220 master-0 
kubenswrapper[34361]: I0224 05:53:11.288120 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-scripts\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.290167 master-0 kubenswrapper[34361]: I0224 05:53:11.290117 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-scripts\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.290167 master-0 kubenswrapper[34361]: I0224 05:53:11.290168 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-etc-nvme\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.290258 master-0 kubenswrapper[34361]: I0224 05:53:11.290190 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-dev\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.292450 master-0 kubenswrapper[34361]: I0224 05:53:11.292424 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-config-data\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.293370 master-0 kubenswrapper[34361]: I0224 05:53:11.293344 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-config-data-custom\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.294543 master-0 kubenswrapper[34361]: I0224 05:53:11.294518 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7b306f6d-75b5-44a6-921c-edbe44ef1c10-sys\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.295561 master-0 kubenswrapper[34361]: I0224 05:53:11.295537 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b306f6d-75b5-44a6-921c-edbe44ef1c10-combined-ca-bundle\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.317134 master-0 kubenswrapper[34361]: I0224 05:53:11.317071 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp7xt\" (UniqueName: \"kubernetes.io/projected/7b306f6d-75b5-44a6-921c-edbe44ef1c10-kube-api-access-lp7xt\") pod \"cinder-b7346-volume-lvm-iscsi-0\" (UID: \"7b306f6d-75b5-44a6-921c-edbe44ef1c10\") " pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.356753 master-0 kubenswrapper[34361]: I0224 05:53:11.356692 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:11.393908 master-0 kubenswrapper[34361]: I0224 05:53:11.393834 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-locks-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.393908 master-0 kubenswrapper[34361]: I0224 05:53:11.393913 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-iscsi\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394256 master-0 kubenswrapper[34361]: I0224 05:53:11.393966 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-sys\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394256 master-0 kubenswrapper[34361]: I0224 05:53:11.394078 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-scripts\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.394256 master-0 kubenswrapper[34361]: I0224 05:53:11.394109 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-config-data\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 
05:53:11.394256 master-0 kubenswrapper[34361]: I0224 05:53:11.394139 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-config-data\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.394256 master-0 kubenswrapper[34361]: I0224 05:53:11.394202 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-nvme\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394256 master-0 kubenswrapper[34361]: I0224 05:53:11.394238 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-scripts\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394472 master-0 kubenswrapper[34361]: I0224 05:53:11.394275 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2wb\" (UniqueName: \"kubernetes.io/projected/da83b788-d2de-45b6-8213-669729adb6d8-kube-api-access-zr2wb\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.394472 master-0 kubenswrapper[34361]: I0224 05:53:11.394343 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-config-data-custom\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 
24 05:53:11.394472 master-0 kubenswrapper[34361]: I0224 05:53:11.394415 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-dev\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394472 master-0 kubenswrapper[34361]: I0224 05:53:11.394455 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-machine-id\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394625 master-0 kubenswrapper[34361]: I0224 05:53:11.394504 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-run\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394625 master-0 kubenswrapper[34361]: I0224 05:53:11.394537 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-combined-ca-bundle\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.394625 master-0 kubenswrapper[34361]: I0224 05:53:11.394571 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68nbn\" (UniqueName: \"kubernetes.io/projected/8874ade6-37d3-4a82-b833-d67ae2d4b704-kube-api-access-68nbn\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394625 master-0 kubenswrapper[34361]: 
I0224 05:53:11.394599 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-locks-brick\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394786 master-0 kubenswrapper[34361]: I0224 05:53:11.394629 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-combined-ca-bundle\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394786 master-0 kubenswrapper[34361]: I0224 05:53:11.394662 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-lib-modules\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394786 master-0 kubenswrapper[34361]: I0224 05:53:11.394702 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-lib-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394786 master-0 kubenswrapper[34361]: I0224 05:53:11.394737 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-config-data-custom\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.394786 master-0 kubenswrapper[34361]: I0224 05:53:11.394781 34361 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da83b788-d2de-45b6-8213-669729adb6d8-etc-machine-id\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.394997 master-0 kubenswrapper[34361]: I0224 05:53:11.394962 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-locks-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.395051 master-0 kubenswrapper[34361]: I0224 05:53:11.395023 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-iscsi\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.395100 master-0 kubenswrapper[34361]: I0224 05:53:11.395060 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-sys\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.403260 master-0 kubenswrapper[34361]: I0224 05:53:11.395464 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-run\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.405176 master-0 kubenswrapper[34361]: I0224 05:53:11.400958 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-locks-brick\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.405176 master-0 kubenswrapper[34361]: I0224 05:53:11.401017 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-var-lib-cinder\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.405176 master-0 kubenswrapper[34361]: I0224 05:53:11.401062 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-lib-modules\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.405176 master-0 kubenswrapper[34361]: I0224 05:53:11.401166 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-nvme\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.405176 master-0 kubenswrapper[34361]: I0224 05:53:11.401709 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-etc-machine-id\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.405176 master-0 kubenswrapper[34361]: I0224 05:53:11.401730 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8874ade6-37d3-4a82-b833-d67ae2d4b704-dev\") pod \"cinder-b7346-backup-0\" (UID: 
\"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.405176 master-0 kubenswrapper[34361]: I0224 05:53:11.404037 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-config-data\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.405176 master-0 kubenswrapper[34361]: I0224 05:53:11.404047 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-combined-ca-bundle\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.406263 master-0 kubenswrapper[34361]: I0224 05:53:11.406214 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-config-data-custom\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.409786 master-0 kubenswrapper[34361]: I0224 05:53:11.409738 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8874ade6-37d3-4a82-b833-d67ae2d4b704-scripts\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.441579 master-0 kubenswrapper[34361]: I0224 05:53:11.441515 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68nbn\" (UniqueName: \"kubernetes.io/projected/8874ade6-37d3-4a82-b833-d67ae2d4b704-kube-api-access-68nbn\") pod \"cinder-b7346-backup-0\" (UID: \"8874ade6-37d3-4a82-b833-d67ae2d4b704\") " pod="openstack/cinder-b7346-backup-0" Feb 24 
05:53:11.497017 master-0 kubenswrapper[34361]: I0224 05:53:11.496944 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-combined-ca-bundle\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.497125 master-0 kubenswrapper[34361]: I0224 05:53:11.497089 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da83b788-d2de-45b6-8213-669729adb6d8-etc-machine-id\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.497221 master-0 kubenswrapper[34361]: I0224 05:53:11.497192 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-scripts\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.497275 master-0 kubenswrapper[34361]: I0224 05:53:11.497226 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-config-data\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.498404 master-0 kubenswrapper[34361]: I0224 05:53:11.497405 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/da83b788-d2de-45b6-8213-669729adb6d8-etc-machine-id\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.498404 master-0 kubenswrapper[34361]: I0224 
05:53:11.497535 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2wb\" (UniqueName: \"kubernetes.io/projected/da83b788-d2de-45b6-8213-669729adb6d8-kube-api-access-zr2wb\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.498404 master-0 kubenswrapper[34361]: I0224 05:53:11.497740 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-config-data-custom\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.501726 master-0 kubenswrapper[34361]: I0224 05:53:11.501213 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-config-data-custom\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.501726 master-0 kubenswrapper[34361]: I0224 05:53:11.501442 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-combined-ca-bundle\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.503216 master-0 kubenswrapper[34361]: I0224 05:53:11.503168 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-config-data\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.504049 master-0 kubenswrapper[34361]: I0224 05:53:11.504000 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da83b788-d2de-45b6-8213-669729adb6d8-scripts\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.525204 master-0 kubenswrapper[34361]: I0224 05:53:11.521427 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2wb\" (UniqueName: \"kubernetes.io/projected/da83b788-d2de-45b6-8213-669729adb6d8-kube-api-access-zr2wb\") pod \"cinder-b7346-scheduler-0\" (UID: \"da83b788-d2de-45b6-8213-669729adb6d8\") " pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.642346 master-0 kubenswrapper[34361]: I0224 05:53:11.642245 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:11.652656 master-0 kubenswrapper[34361]: I0224 05:53:11.651905 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:11.872016 master-0 kubenswrapper[34361]: I0224 05:53:11.871947 34361 generic.go:334] "Generic (PLEG): container finished" podID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerID="abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663" exitCode=0 Feb 24 05:53:11.874209 master-0 kubenswrapper[34361]: I0224 05:53:11.874161 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerDied","Data":"abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663"} Feb 24 05:53:11.901088 master-0 kubenswrapper[34361]: I0224 05:53:11.900905 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cc9f57487-vklxq" event={"ID":"e92bebe0-2823-46e2-bd8f-7755ed558ab8","Type":"ContainerStarted","Data":"0a59a260eb29d14d516d8653648a5baa29997aac495c5449a94d94dc146c16b0"} Feb 24 05:53:11.901088 master-0 kubenswrapper[34361]: I0224 05:53:11.901019 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cc9f57487-vklxq" event={"ID":"e92bebe0-2823-46e2-bd8f-7755ed558ab8","Type":"ContainerStarted","Data":"d22cb69ff90b2128fa48a74e2e65dffaf4edb31fab048969c511a289526ec3ce"} Feb 24 05:53:11.937887 master-0 kubenswrapper[34361]: I0224 05:53:11.936247 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-volume-lvm-iscsi-0"] Feb 24 05:53:11.961191 master-0 kubenswrapper[34361]: I0224 05:53:11.961123 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerStarted","Data":"85043cc2d3bf49279706da8319df6ab4c90efe3d64f23ecf94e8402792cf86b2"} Feb 24 05:53:11.966278 master-0 kubenswrapper[34361]: W0224 05:53:11.966219 34361 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b306f6d_75b5_44a6_921c_edbe44ef1c10.slice/crio-1326d459e45d79f656033010276d2fece3ec55680626333ee2d44e09899eb579 WatchSource:0}: Error finding container 1326d459e45d79f656033010276d2fece3ec55680626333ee2d44e09899eb579: Status 404 returned error can't find the container with id 1326d459e45d79f656033010276d2fece3ec55680626333ee2d44e09899eb579 Feb 24 05:53:12.323937 master-0 kubenswrapper[34361]: I0224 05:53:12.323785 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-backup-0"] Feb 24 05:53:12.358525 master-0 kubenswrapper[34361]: W0224 05:53:12.357382 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8874ade6_37d3_4a82_b833_d67ae2d4b704.slice/crio-04bd6ee6b476c308e3c282fd91ce7f3105567e568c12887b84bf73d5976549b2 WatchSource:0}: Error finding container 04bd6ee6b476c308e3c282fd91ce7f3105567e568c12887b84bf73d5976549b2: Status 404 returned error can't find the container with id 04bd6ee6b476c308e3c282fd91ce7f3105567e568c12887b84bf73d5976549b2 Feb 24 05:53:12.435329 master-0 kubenswrapper[34361]: W0224 05:53:12.435233 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda83b788_d2de_45b6_8213_669729adb6d8.slice/crio-839009fc77d8f383dfb9d0a866c0cf0c8ed2cdb7feaec4c16817feafb95bddb9 WatchSource:0}: Error finding container 839009fc77d8f383dfb9d0a866c0cf0c8ed2cdb7feaec4c16817feafb95bddb9: Status 404 returned error can't find the container with id 839009fc77d8f383dfb9d0a866c0cf0c8ed2cdb7feaec4c16817feafb95bddb9 Feb 24 05:53:12.436813 master-0 kubenswrapper[34361]: I0224 05:53:12.436750 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b7346-scheduler-0"] Feb 24 05:53:12.633626 master-0 kubenswrapper[34361]: I0224 05:53:12.633572 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="2f0b28b5-741c-4761-b250-30d89ea99407" path="/var/lib/kubelet/pods/2f0b28b5-741c-4761-b250-30d89ea99407/volumes" Feb 24 05:53:12.634998 master-0 kubenswrapper[34361]: I0224 05:53:12.634959 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1490d04-fc1d-488b-a427-285554ec1692" path="/var/lib/kubelet/pods/c1490d04-fc1d-488b-a427-285554ec1692/volumes" Feb 24 05:53:12.635946 master-0 kubenswrapper[34361]: I0224 05:53:12.635890 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fad53e67-bd04-4577-af57-e5b896b6e56f" path="/var/lib/kubelet/pods/fad53e67-bd04-4577-af57-e5b896b6e56f/volumes" Feb 24 05:53:13.030352 master-0 kubenswrapper[34361]: I0224 05:53:13.027630 34361 generic.go:334] "Generic (PLEG): container finished" podID="e92bebe0-2823-46e2-bd8f-7755ed558ab8" containerID="0a59a260eb29d14d516d8653648a5baa29997aac495c5449a94d94dc146c16b0" exitCode=0 Feb 24 05:53:13.030352 master-0 kubenswrapper[34361]: I0224 05:53:13.027766 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cc9f57487-vklxq" event={"ID":"e92bebe0-2823-46e2-bd8f-7755ed558ab8","Type":"ContainerDied","Data":"0a59a260eb29d14d516d8653648a5baa29997aac495c5449a94d94dc146c16b0"} Feb 24 05:53:13.071418 master-0 kubenswrapper[34361]: I0224 05:53:13.053809 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerStarted","Data":"248e35dd7b4b76ce44f62f5e95e08b52647f21279f414712e30e8145f414f515"} Feb 24 05:53:13.071418 master-0 kubenswrapper[34361]: I0224 05:53:13.059148 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" event={"ID":"8874ade6-37d3-4a82-b833-d67ae2d4b704","Type":"ContainerStarted","Data":"5cb73e86284ab84b5d5b0d9561ead651f35ca206b0a719f8f0703871eaa9fa1d"} Feb 24 05:53:13.071418 master-0 kubenswrapper[34361]: I0224 05:53:13.059207 34361 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" event={"ID":"8874ade6-37d3-4a82-b833-d67ae2d4b704","Type":"ContainerStarted","Data":"04bd6ee6b476c308e3c282fd91ce7f3105567e568c12887b84bf73d5976549b2"} Feb 24 05:53:13.149656 master-0 kubenswrapper[34361]: I0224 05:53:13.149597 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerStarted","Data":"302e4fe8f1a6cfaf578aaa36a65fcc1c76f5ccb9cc311bdce5902a0fcefb4d5e"} Feb 24 05:53:13.149656 master-0 kubenswrapper[34361]: I0224 05:53:13.149653 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerStarted","Data":"c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8"} Feb 24 05:53:13.151361 master-0 kubenswrapper[34361]: I0224 05:53:13.151328 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:13.170783 master-0 kubenswrapper[34361]: I0224 05:53:13.170701 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" event={"ID":"7b306f6d-75b5-44a6-921c-edbe44ef1c10","Type":"ContainerStarted","Data":"981df9b6afb2cd92c33846f33e31d66bb0f9485a4dcef8a58d93399c0d3cb3c1"} Feb 24 05:53:13.170783 master-0 kubenswrapper[34361]: I0224 05:53:13.170776 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" event={"ID":"7b306f6d-75b5-44a6-921c-edbe44ef1c10","Type":"ContainerStarted","Data":"6ca194502dc672528e9163dc23d5ce90596727360a747fa20a310b60e848d8ef"} Feb 24 05:53:13.170995 master-0 kubenswrapper[34361]: I0224 05:53:13.170790 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" 
event={"ID":"7b306f6d-75b5-44a6-921c-edbe44ef1c10","Type":"ContainerStarted","Data":"1326d459e45d79f656033010276d2fece3ec55680626333ee2d44e09899eb579"} Feb 24 05:53:13.177428 master-0 kubenswrapper[34361]: I0224 05:53:13.175373 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"da83b788-d2de-45b6-8213-669729adb6d8","Type":"ContainerStarted","Data":"839009fc77d8f383dfb9d0a866c0cf0c8ed2cdb7feaec4c16817feafb95bddb9"} Feb 24 05:53:13.229736 master-0 kubenswrapper[34361]: I0224 05:53:13.228468 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-555fd64789-cgpft" podStartSLOduration=5.55724585 podStartE2EDuration="9.228437044s" podCreationTimestamp="2026-02-24 05:53:04 +0000 UTC" firstStartedPulling="2026-02-24 05:53:06.265607501 +0000 UTC m=+945.968224547" lastFinishedPulling="2026-02-24 05:53:09.936798695 +0000 UTC m=+949.639415741" observedRunningTime="2026-02-24 05:53:13.213359727 +0000 UTC m=+952.915976783" watchObservedRunningTime="2026-02-24 05:53:13.228437044 +0000 UTC m=+952.931054090" Feb 24 05:53:13.260115 master-0 kubenswrapper[34361]: I0224 05:53:13.260053 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" podStartSLOduration=3.260034715 podStartE2EDuration="3.260034715s" podCreationTimestamp="2026-02-24 05:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:13.247098527 +0000 UTC m=+952.949715573" watchObservedRunningTime="2026-02-24 05:53:13.260034715 +0000 UTC m=+952.962651761" Feb 24 05:53:14.238722 master-0 kubenswrapper[34361]: I0224 05:53:14.238637 34361 generic.go:334] "Generic (PLEG): container finished" podID="40a5b237-764f-4367-85a5-4153a8f90a3e" containerID="bd91a8454d87028b9e8706db9f6b6940724d7cb0d147ecc297afc26e76ed0e85" exitCode=1 Feb 24 05:53:14.239495 master-0 
kubenswrapper[34361]: I0224 05:53:14.238766 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" event={"ID":"40a5b237-764f-4367-85a5-4153a8f90a3e","Type":"ContainerDied","Data":"bd91a8454d87028b9e8706db9f6b6940724d7cb0d147ecc297afc26e76ed0e85"} Feb 24 05:53:14.239849 master-0 kubenswrapper[34361]: I0224 05:53:14.239815 34361 scope.go:117] "RemoveContainer" containerID="bd91a8454d87028b9e8706db9f6b6940724d7cb0d147ecc297afc26e76ed0e85" Feb 24 05:53:14.248350 master-0 kubenswrapper[34361]: I0224 05:53:14.248256 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"da83b788-d2de-45b6-8213-669729adb6d8","Type":"ContainerStarted","Data":"c59dbd14779d4fb4685be52410eefcdf4b28b8f04a9fbb21d1d4ff5593adf590"} Feb 24 05:53:14.250544 master-0 kubenswrapper[34361]: I0224 05:53:14.250509 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cc9f57487-vklxq" event={"ID":"e92bebe0-2823-46e2-bd8f-7755ed558ab8","Type":"ContainerStarted","Data":"713e7d4435f295030a4b857750d17c1d3134eb4dd80a9b7b0403b8a62f4a441b"} Feb 24 05:53:14.250544 master-0 kubenswrapper[34361]: I0224 05:53:14.250539 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-6cc9f57487-vklxq" event={"ID":"e92bebe0-2823-46e2-bd8f-7755ed558ab8","Type":"ContainerStarted","Data":"e53f19431c57c0d1e4bbbad932003fb40b2a06582838d0533d1f816c2c126897"} Feb 24 05:53:14.251837 master-0 kubenswrapper[34361]: I0224 05:53:14.251802 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-6cc9f57487-vklxq" Feb 24 05:53:14.263832 master-0 kubenswrapper[34361]: I0224 05:53:14.263766 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-backup-0" event={"ID":"8874ade6-37d3-4a82-b833-d67ae2d4b704","Type":"ContainerStarted","Data":"7783b4d61c7f00d5d419249ece14ad148e13cf57949fd0cf1d1f9621243059b3"} Feb 24 05:53:14.277922 
master-0 kubenswrapper[34361]: I0224 05:53:14.277835 34361 generic.go:334] "Generic (PLEG): container finished" podID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerID="302e4fe8f1a6cfaf578aaa36a65fcc1c76f5ccb9cc311bdce5902a0fcefb4d5e" exitCode=1 Feb 24 05:53:14.280658 master-0 kubenswrapper[34361]: I0224 05:53:14.280620 34361 scope.go:117] "RemoveContainer" containerID="302e4fe8f1a6cfaf578aaa36a65fcc1c76f5ccb9cc311bdce5902a0fcefb4d5e" Feb 24 05:53:14.281171 master-0 kubenswrapper[34361]: I0224 05:53:14.281094 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerDied","Data":"302e4fe8f1a6cfaf578aaa36a65fcc1c76f5ccb9cc311bdce5902a0fcefb4d5e"} Feb 24 05:53:14.459462 master-0 kubenswrapper[34361]: I0224 05:53:14.459303 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-6cc9f57487-vklxq" podStartSLOduration=7.45927412 podStartE2EDuration="7.45927412s" podCreationTimestamp="2026-02-24 05:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:14.453891944 +0000 UTC m=+954.156508990" watchObservedRunningTime="2026-02-24 05:53:14.45927412 +0000 UTC m=+954.161891166" Feb 24 05:53:14.496963 master-0 kubenswrapper[34361]: I0224 05:53:14.496883 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-backup-0" podStartSLOduration=3.496860564 podStartE2EDuration="3.496860564s" podCreationTimestamp="2026-02-24 05:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:14.492478135 +0000 UTC m=+954.195095181" watchObservedRunningTime="2026-02-24 05:53:14.496860564 +0000 UTC m=+954.199477610" Feb 24 05:53:14.746604 master-0 kubenswrapper[34361]: I0224 05:53:14.746532 34361 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-64cf598f88-t2877" Feb 24 05:53:14.774651 master-0 kubenswrapper[34361]: I0224 05:53:14.774563 34361 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:14.898794 master-0 kubenswrapper[34361]: I0224 05:53:14.898600 34361 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-555fd64789-cgpft" Feb 24 05:53:15.296884 master-0 kubenswrapper[34361]: I0224 05:53:15.296808 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" event={"ID":"40a5b237-764f-4367-85a5-4153a8f90a3e","Type":"ContainerStarted","Data":"8d2749b27df058fd8c580c7ee172eac939e79e13640fc3cdd1176aef20aced3c"} Feb 24 05:53:15.297634 master-0 kubenswrapper[34361]: I0224 05:53:15.297056 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:15.314294 master-0 kubenswrapper[34361]: I0224 05:53:15.314009 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b7346-scheduler-0" event={"ID":"da83b788-d2de-45b6-8213-669729adb6d8","Type":"ContainerStarted","Data":"8e9f7b3c90386ff7968b946947301e3ae0fda31c086ebcb0e536959e315d0540"} Feb 24 05:53:15.325377 master-0 kubenswrapper[34361]: I0224 05:53:15.321763 34361 generic.go:334] "Generic (PLEG): container finished" podID="74198545-a0ee-4142-93a6-86175a1d3c02" containerID="248e35dd7b4b76ce44f62f5e95e08b52647f21279f414712e30e8145f414f515" exitCode=0 Feb 24 05:53:15.325377 master-0 kubenswrapper[34361]: I0224 05:53:15.321910 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerDied","Data":"248e35dd7b4b76ce44f62f5e95e08b52647f21279f414712e30e8145f414f515"} Feb 24 05:53:15.326991 master-0 
kubenswrapper[34361]: I0224 05:53:15.326935 34361 generic.go:334] "Generic (PLEG): container finished" podID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerID="be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074" exitCode=1
Feb 24 05:53:15.327143 master-0 kubenswrapper[34361]: I0224 05:53:15.327075 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerDied","Data":"be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074"}
Feb 24 05:53:15.327198 master-0 kubenswrapper[34361]: I0224 05:53:15.327174 34361 scope.go:117] "RemoveContainer" containerID="302e4fe8f1a6cfaf578aaa36a65fcc1c76f5ccb9cc311bdce5902a0fcefb4d5e"
Feb 24 05:53:15.328172 master-0 kubenswrapper[34361]: I0224 05:53:15.328043 34361 scope.go:117] "RemoveContainer" containerID="be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074"
Feb 24 05:53:15.328448 master-0 kubenswrapper[34361]: E0224 05:53:15.328409 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-555fd64789-cgpft_openstack(700c3143-d1a3-47a3-92f5-02a0b1e428a4)\"" pod="openstack/ironic-555fd64789-cgpft" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4"
Feb 24 05:53:15.396682 master-0 kubenswrapper[34361]: I0224 05:53:15.396568 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b7346-scheduler-0" podStartSLOduration=4.396537159 podStartE2EDuration="4.396537159s" podCreationTimestamp="2026-02-24 05:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:15.395796149 +0000 UTC m=+955.098413215" watchObservedRunningTime="2026-02-24 05:53:15.396537159 +0000 UTC m=+955.099154205"
Feb 24 05:53:16.094432 master-0 kubenswrapper[34361]: I0224 05:53:16.093738 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 24 05:53:16.098855 master-0 kubenswrapper[34361]: I0224 05:53:16.097905 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 24 05:53:16.101408 master-0 kubenswrapper[34361]: I0224 05:53:16.101363 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 24 05:53:16.101643 master-0 kubenswrapper[34361]: I0224 05:53:16.101615 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 24 05:53:16.132994 master-0 kubenswrapper[34361]: I0224 05:53:16.132140 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 24 05:53:16.167901 master-0 kubenswrapper[34361]: I0224 05:53:16.167810 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-b7346-api-0"
Feb 24 05:53:16.195733 master-0 kubenswrapper[34361]: I0224 05:53:16.195632 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-combined-ca-bundle\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.203488 master-0 kubenswrapper[34361]: I0224 05:53:16.195903 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config-secret\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.203488 master-0 kubenswrapper[34361]: I0224 05:53:16.195941 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncm67\" (UniqueName: \"kubernetes.io/projected/42e4dc08-bb96-4726-9ef0-9dc587361403-kube-api-access-ncm67\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.203488 master-0 kubenswrapper[34361]: I0224 05:53:16.196016 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.271547 master-0 kubenswrapper[34361]: I0224 05:53:16.271432 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Feb 24 05:53:16.278350 master-0 kubenswrapper[34361]: E0224 05:53:16.273412 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-ncm67 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="42e4dc08-bb96-4726-9ef0-9dc587361403"
Feb 24 05:53:16.282659 master-0 kubenswrapper[34361]: I0224 05:53:16.281940 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Feb 24 05:53:16.307716 master-0 kubenswrapper[34361]: I0224 05:53:16.307559 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-combined-ca-bundle\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.308681 master-0 kubenswrapper[34361]: I0224 05:53:16.308224 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config-secret\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.308681 master-0 kubenswrapper[34361]: I0224 05:53:16.308354 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncm67\" (UniqueName: \"kubernetes.io/projected/42e4dc08-bb96-4726-9ef0-9dc587361403-kube-api-access-ncm67\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.308814 master-0 kubenswrapper[34361]: I0224 05:53:16.308687 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.331238 master-0 kubenswrapper[34361]: E0224 05:53:16.327018 34361 projected.go:194] Error preparing data for projected volume kube-api-access-ncm67 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:master-0" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'master-0' and this object
Feb 24 05:53:16.331238 master-0 kubenswrapper[34361]: E0224 05:53:16.330097 34361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/42e4dc08-bb96-4726-9ef0-9dc587361403-kube-api-access-ncm67 podName:42e4dc08-bb96-4726-9ef0-9dc587361403 nodeName:}" failed. No retries permitted until 2026-02-24 05:53:16.830066476 +0000 UTC m=+956.532683522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncm67" (UniqueName: "kubernetes.io/projected/42e4dc08-bb96-4726-9ef0-9dc587361403-kube-api-access-ncm67") pod "openstackclient" (UID: "42e4dc08-bb96-4726-9ef0-9dc587361403") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:master-0" cannot create resource "serviceaccounts/token" in API group "" in the namespace "openstack": no relationship found between node 'master-0' and this object
Feb 24 05:53:16.331238 master-0 kubenswrapper[34361]: I0224 05:53:16.331096 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config-secret\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.354338 master-0 kubenswrapper[34361]: I0224 05:53:16.354188 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.358206 master-0 kubenswrapper[34361]: I0224 05:53:16.358076 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b7346-volume-lvm-iscsi-0"
Feb 24 05:53:16.358926 master-0 kubenswrapper[34361]: I0224 05:53:16.358864 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-combined-ca-bundle\") pod \"openstackclient\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") " pod="openstack/openstackclient"
Feb 24 05:53:16.422703 master-0 kubenswrapper[34361]: I0224 05:53:16.422642 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 24 05:53:16.423665 master-0 kubenswrapper[34361]: I0224 05:53:16.423631 34361 scope.go:117] "RemoveContainer" containerID="be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074"
Feb 24 05:53:16.424039 master-0 kubenswrapper[34361]: E0224 05:53:16.424015 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-api pod=ironic-555fd64789-cgpft_openstack(700c3143-d1a3-47a3-92f5-02a0b1e428a4)\"" pod="openstack/ironic-555fd64789-cgpft" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4"
Feb 24 05:53:16.466479 master-0 kubenswrapper[34361]: I0224 05:53:16.466401 34361 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="42e4dc08-bb96-4726-9ef0-9dc587361403" podUID="6be4831a-3890-44a6-8e35-58245f3d1ae0"
Feb 24 05:53:16.481488 master-0 kubenswrapper[34361]: I0224 05:53:16.480908 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 24 05:53:16.515844 master-0 kubenswrapper[34361]: I0224 05:53:16.510936 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 24 05:53:16.518776 master-0 kubenswrapper[34361]: I0224 05:53:16.518655 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 24 05:53:16.539274 master-0 kubenswrapper[34361]: I0224 05:53:16.536245 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 24 05:53:16.630473 master-0 kubenswrapper[34361]: I0224 05:53:16.625266 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config-secret\") pod \"42e4dc08-bb96-4726-9ef0-9dc587361403\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") "
Feb 24 05:53:16.630473 master-0 kubenswrapper[34361]: I0224 05:53:16.625771 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-combined-ca-bundle\") pod \"42e4dc08-bb96-4726-9ef0-9dc587361403\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") "
Feb 24 05:53:16.630473 master-0 kubenswrapper[34361]: I0224 05:53:16.625819 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config\") pod \"42e4dc08-bb96-4726-9ef0-9dc587361403\" (UID: \"42e4dc08-bb96-4726-9ef0-9dc587361403\") "
Feb 24 05:53:16.630473 master-0 kubenswrapper[34361]: I0224 05:53:16.626245 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6be4831a-3890-44a6-8e35-58245f3d1ae0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.630473 master-0 kubenswrapper[34361]: I0224 05:53:16.626328 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be4831a-3890-44a6-8e35-58245f3d1ae0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.630473 master-0 kubenswrapper[34361]: I0224 05:53:16.626360 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj5ds\" (UniqueName: \"kubernetes.io/projected/6be4831a-3890-44a6-8e35-58245f3d1ae0-kube-api-access-gj5ds\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.630473 master-0 kubenswrapper[34361]: I0224 05:53:16.626405 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6be4831a-3890-44a6-8e35-58245f3d1ae0-openstack-config\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.634336 master-0 kubenswrapper[34361]: I0224 05:53:16.633660 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "42e4dc08-bb96-4726-9ef0-9dc587361403" (UID: "42e4dc08-bb96-4726-9ef0-9dc587361403"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:53:16.643469 master-0 kubenswrapper[34361]: I0224 05:53:16.641476 34361 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:16.643469 master-0 kubenswrapper[34361]: I0224 05:53:16.641628 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncm67\" (UniqueName: \"kubernetes.io/projected/42e4dc08-bb96-4726-9ef0-9dc587361403-kube-api-access-ncm67\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:16.646410 master-0 kubenswrapper[34361]: I0224 05:53:16.645541 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "42e4dc08-bb96-4726-9ef0-9dc587361403" (UID: "42e4dc08-bb96-4726-9ef0-9dc587361403"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:16.646410 master-0 kubenswrapper[34361]: I0224 05:53:16.645775 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42e4dc08-bb96-4726-9ef0-9dc587361403" (UID: "42e4dc08-bb96-4726-9ef0-9dc587361403"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:16.683427 master-0 kubenswrapper[34361]: I0224 05:53:16.683219 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e4dc08-bb96-4726-9ef0-9dc587361403" path="/var/lib/kubelet/pods/42e4dc08-bb96-4726-9ef0-9dc587361403/volumes"
Feb 24 05:53:16.691397 master-0 kubenswrapper[34361]: I0224 05:53:16.684621 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b7346-backup-0"
Feb 24 05:53:16.691397 master-0 kubenswrapper[34361]: I0224 05:53:16.684660 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-b7346-scheduler-0"
Feb 24 05:53:16.751302 master-0 kubenswrapper[34361]: I0224 05:53:16.751140 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6be4831a-3890-44a6-8e35-58245f3d1ae0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.751302 master-0 kubenswrapper[34361]: I0224 05:53:16.751285 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be4831a-3890-44a6-8e35-58245f3d1ae0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.751302 master-0 kubenswrapper[34361]: I0224 05:53:16.751327 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj5ds\" (UniqueName: \"kubernetes.io/projected/6be4831a-3890-44a6-8e35-58245f3d1ae0-kube-api-access-gj5ds\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.751703 master-0 kubenswrapper[34361]: I0224 05:53:16.751386 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6be4831a-3890-44a6-8e35-58245f3d1ae0-openstack-config\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.751703 master-0 kubenswrapper[34361]: I0224 05:53:16.751510 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:16.751703 master-0 kubenswrapper[34361]: I0224 05:53:16.751526 34361 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/42e4dc08-bb96-4726-9ef0-9dc587361403-openstack-config-secret\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:16.766086 master-0 kubenswrapper[34361]: I0224 05:53:16.762839 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6be4831a-3890-44a6-8e35-58245f3d1ae0-openstack-config\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.773327 master-0 kubenswrapper[34361]: I0224 05:53:16.773197 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6be4831a-3890-44a6-8e35-58245f3d1ae0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.777365 master-0 kubenswrapper[34361]: I0224 05:53:16.774069 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6be4831a-3890-44a6-8e35-58245f3d1ae0-openstack-config-secret\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.807499 master-0 kubenswrapper[34361]: I0224 05:53:16.807384 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj5ds\" (UniqueName: \"kubernetes.io/projected/6be4831a-3890-44a6-8e35-58245f3d1ae0-kube-api-access-gj5ds\") pod \"openstackclient\" (UID: \"6be4831a-3890-44a6-8e35-58245f3d1ae0\") " pod="openstack/openstackclient"
Feb 24 05:53:16.853172 master-0 kubenswrapper[34361]: I0224 05:53:16.848203 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 24 05:53:17.437071 master-0 kubenswrapper[34361]: I0224 05:53:17.437010 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 24 05:53:17.465932 master-0 kubenswrapper[34361]: I0224 05:53:17.465850 34361 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="42e4dc08-bb96-4726-9ef0-9dc587361403" podUID="6be4831a-3890-44a6-8e35-58245f3d1ae0"
Feb 24 05:53:17.485601 master-0 kubenswrapper[34361]: I0224 05:53:17.485472 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 24 05:53:18.351685 master-0 kubenswrapper[34361]: I0224 05:53:18.351597 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-6cc9f57487-vklxq"
Feb 24 05:53:18.470513 master-0 kubenswrapper[34361]: I0224 05:53:18.470454 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-555fd64789-cgpft"]
Feb 24 05:53:18.470938 master-0 kubenswrapper[34361]: I0224 05:53:18.470723 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-555fd64789-cgpft" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api-log" containerID="cri-o://c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8" gracePeriod=60
Feb 24 05:53:18.476271 master-0 kubenswrapper[34361]: I0224 05:53:18.476201 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6be4831a-3890-44a6-8e35-58245f3d1ae0","Type":"ContainerStarted","Data":"1a09dc2ef5b09f108c25c8f695654e542bd28b87b9d1919019b29e08d30a0763"}
Feb 24 05:53:18.489725 master-0 kubenswrapper[34361]: I0224 05:53:18.488748 34361 generic.go:334] "Generic (PLEG): container finished" podID="40a5b237-764f-4367-85a5-4153a8f90a3e" containerID="8d2749b27df058fd8c580c7ee172eac939e79e13640fc3cdd1176aef20aced3c" exitCode=1
Feb 24 05:53:18.489725 master-0 kubenswrapper[34361]: I0224 05:53:18.488810 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" event={"ID":"40a5b237-764f-4367-85a5-4153a8f90a3e","Type":"ContainerDied","Data":"8d2749b27df058fd8c580c7ee172eac939e79e13640fc3cdd1176aef20aced3c"}
Feb 24 05:53:18.489725 master-0 kubenswrapper[34361]: I0224 05:53:18.488855 34361 scope.go:117] "RemoveContainer" containerID="bd91a8454d87028b9e8706db9f6b6940724d7cb0d147ecc297afc26e76ed0e85"
Feb 24 05:53:18.490688 master-0 kubenswrapper[34361]: I0224 05:53:18.490509 34361 scope.go:117] "RemoveContainer" containerID="8d2749b27df058fd8c580c7ee172eac939e79e13640fc3cdd1176aef20aced3c"
Feb 24 05:53:18.490850 master-0 kubenswrapper[34361]: E0224 05:53:18.490819 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-856d98ff5d-2p7np_openstack(40a5b237-764f-4367-85a5-4153a8f90a3e)\"" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" podUID="40a5b237-764f-4367-85a5-4153a8f90a3e"
Feb 24 05:53:19.260361 master-0 kubenswrapper[34361]: I0224 05:53:19.258774 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-pd272"]
Feb 24 05:53:19.270036 master-0 kubenswrapper[34361]: I0224 05:53:19.267381 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.278482 master-0 kubenswrapper[34361]: I0224 05:53:19.277732 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Feb 24 05:53:19.315737 master-0 kubenswrapper[34361]: I0224 05:53:19.315662 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Feb 24 05:53:19.325611 master-0 kubenswrapper[34361]: I0224 05:53:19.325539 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-555fd64789-cgpft"
Feb 24 05:53:19.439935 master-0 kubenswrapper[34361]: I0224 05:53:19.439841 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-pd272"]
Feb 24 05:53:19.463142 master-0 kubenswrapper[34361]: I0224 05:53:19.463001 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/700c3143-d1a3-47a3-92f5-02a0b1e428a4-etc-podinfo\") pod \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") "
Feb 24 05:53:19.463381 master-0 kubenswrapper[34361]: I0224 05:53:19.463239 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-scripts\") pod \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") "
Feb 24 05:53:19.463431 master-0 kubenswrapper[34361]: I0224 05:53:19.463404 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-merged\") pod \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") "
Feb 24 05:53:19.463497 master-0 kubenswrapper[34361]: I0224 05:53:19.463478 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p2h7\" (UniqueName: \"kubernetes.io/projected/700c3143-d1a3-47a3-92f5-02a0b1e428a4-kube-api-access-5p2h7\") pod \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") "
Feb 24 05:53:19.463555 master-0 kubenswrapper[34361]: I0224 05:53:19.463506 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data\") pod \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") "
Feb 24 05:53:19.463555 master-0 kubenswrapper[34361]: I0224 05:53:19.463535 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-custom\") pod \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") "
Feb 24 05:53:19.463629 master-0 kubenswrapper[34361]: I0224 05:53:19.463595 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-logs\") pod \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") "
Feb 24 05:53:19.463776 master-0 kubenswrapper[34361]: I0224 05:53:19.463758 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-combined-ca-bundle\") pod \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\" (UID: \"700c3143-d1a3-47a3-92f5-02a0b1e428a4\") "
Feb 24 05:53:19.464136 master-0 kubenswrapper[34361]: I0224 05:53:19.464107 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9c9\" (UniqueName: \"kubernetes.io/projected/e59553b6-01d7-45a8-8475-647431627701-kube-api-access-fw9c9\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.464214 master-0 kubenswrapper[34361]: I0224 05:53:19.464191 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e59553b6-01d7-45a8-8475-647431627701-etc-podinfo\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.464288 master-0 kubenswrapper[34361]: I0224 05:53:19.464264 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.464368 master-0 kubenswrapper[34361]: I0224 05:53:19.464293 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.464368 master-0 kubenswrapper[34361]: I0224 05:53:19.464349 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-config\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.464433 master-0 kubenswrapper[34361]: I0224 05:53:19.464413 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-scripts\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.464470 master-0 kubenswrapper[34361]: I0224 05:53:19.464453 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-combined-ca-bundle\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.465909 master-0 kubenswrapper[34361]: I0224 05:53:19.465806 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-logs" (OuterVolumeSpecName: "logs") pod "700c3143-d1a3-47a3-92f5-02a0b1e428a4" (UID: "700c3143-d1a3-47a3-92f5-02a0b1e428a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:53:19.469389 master-0 kubenswrapper[34361]: I0224 05:53:19.468268 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "700c3143-d1a3-47a3-92f5-02a0b1e428a4" (UID: "700c3143-d1a3-47a3-92f5-02a0b1e428a4"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:53:19.469389 master-0 kubenswrapper[34361]: I0224 05:53:19.468845 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/700c3143-d1a3-47a3-92f5-02a0b1e428a4-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "700c3143-d1a3-47a3-92f5-02a0b1e428a4" (UID: "700c3143-d1a3-47a3-92f5-02a0b1e428a4"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 24 05:53:19.477088 master-0 kubenswrapper[34361]: I0224 05:53:19.470128 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700c3143-d1a3-47a3-92f5-02a0b1e428a4-kube-api-access-5p2h7" (OuterVolumeSpecName: "kube-api-access-5p2h7") pod "700c3143-d1a3-47a3-92f5-02a0b1e428a4" (UID: "700c3143-d1a3-47a3-92f5-02a0b1e428a4"). InnerVolumeSpecName "kube-api-access-5p2h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:19.477595 master-0 kubenswrapper[34361]: I0224 05:53:19.477538 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-scripts" (OuterVolumeSpecName: "scripts") pod "700c3143-d1a3-47a3-92f5-02a0b1e428a4" (UID: "700c3143-d1a3-47a3-92f5-02a0b1e428a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:19.477874 master-0 kubenswrapper[34361]: I0224 05:53:19.477806 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "700c3143-d1a3-47a3-92f5-02a0b1e428a4" (UID: "700c3143-d1a3-47a3-92f5-02a0b1e428a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:19.533459 master-0 kubenswrapper[34361]: I0224 05:53:19.530631 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data" (OuterVolumeSpecName: "config-data") pod "700c3143-d1a3-47a3-92f5-02a0b1e428a4" (UID: "700c3143-d1a3-47a3-92f5-02a0b1e428a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:19.550761 master-0 kubenswrapper[34361]: I0224 05:53:19.533929 34361 generic.go:334] "Generic (PLEG): container finished" podID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerID="c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8" exitCode=143
Feb 24 05:53:19.550761 master-0 kubenswrapper[34361]: I0224 05:53:19.534019 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerDied","Data":"c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8"}
Feb 24 05:53:19.550761 master-0 kubenswrapper[34361]: I0224 05:53:19.534057 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-555fd64789-cgpft" event={"ID":"700c3143-d1a3-47a3-92f5-02a0b1e428a4","Type":"ContainerDied","Data":"901d1fef311d3ee7dc425fda3b0a6ce4475633456db187f2a83f24b8deabf5a0"}
Feb 24 05:53:19.550761 master-0 kubenswrapper[34361]: I0224 05:53:19.534077 34361 scope.go:117] "RemoveContainer" containerID="be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074"
Feb 24 05:53:19.550761 master-0 kubenswrapper[34361]: I0224 05:53:19.534230 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-555fd64789-cgpft"
Feb 24 05:53:19.567234 master-0 kubenswrapper[34361]: I0224 05:53:19.567173 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-scripts\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.567234 master-0 kubenswrapper[34361]: I0224 05:53:19.567252 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-combined-ca-bundle\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.567614 master-0 kubenswrapper[34361]: I0224 05:53:19.567319 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9c9\" (UniqueName: \"kubernetes.io/projected/e59553b6-01d7-45a8-8475-647431627701-kube-api-access-fw9c9\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.567614 master-0 kubenswrapper[34361]: I0224 05:53:19.567585 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e59553b6-01d7-45a8-8475-647431627701-etc-podinfo\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.567679 master-0 kubenswrapper[34361]: I0224 05:53:19.567660 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.567713 master-0 kubenswrapper[34361]: I0224 05:53:19.567683 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.567751 master-0 kubenswrapper[34361]: I0224 05:53:19.567718 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-config\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.567812 master-0 kubenswrapper[34361]: I0224 05:53:19.567791 34361 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-merged\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:19.567812 master-0 kubenswrapper[34361]: I0224 05:53:19.567810 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p2h7\" (UniqueName: \"kubernetes.io/projected/700c3143-d1a3-47a3-92f5-02a0b1e428a4-kube-api-access-5p2h7\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:19.567895 master-0 kubenswrapper[34361]: I0224 05:53:19.567822 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:19.567895 master-0 kubenswrapper[34361]: I0224 05:53:19.567833 34361 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-config-data-custom\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:19.567895 master-0 kubenswrapper[34361]: I0224 05:53:19.567842 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/700c3143-d1a3-47a3-92f5-02a0b1e428a4-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:19.567895 master-0 kubenswrapper[34361]: I0224 05:53:19.567851 34361 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/700c3143-d1a3-47a3-92f5-02a0b1e428a4-etc-podinfo\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:19.567895 master-0 kubenswrapper[34361]: I0224 05:53:19.567861 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:19.571362 master-0 kubenswrapper[34361]: I0224 05:53:19.570414 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.571362 master-0 kubenswrapper[34361]: I0224 05:53:19.571173 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272"
Feb 24 05:53:19.582751 master-0 kubenswrapper[34361]: I0224 05:53:19.575900 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName:
\"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-scripts\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272" Feb 24 05:53:19.582751 master-0 kubenswrapper[34361]: I0224 05:53:19.576550 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-config\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272" Feb 24 05:53:19.582751 master-0 kubenswrapper[34361]: I0224 05:53:19.579199 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e59553b6-01d7-45a8-8475-647431627701-etc-podinfo\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272" Feb 24 05:53:19.584770 master-0 kubenswrapper[34361]: I0224 05:53:19.584702 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "700c3143-d1a3-47a3-92f5-02a0b1e428a4" (UID: "700c3143-d1a3-47a3-92f5-02a0b1e428a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:19.585744 master-0 kubenswrapper[34361]: I0224 05:53:19.585604 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-combined-ca-bundle\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272" Feb 24 05:53:19.596708 master-0 kubenswrapper[34361]: I0224 05:53:19.592211 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9c9\" (UniqueName: \"kubernetes.io/projected/e59553b6-01d7-45a8-8475-647431627701-kube-api-access-fw9c9\") pod \"ironic-inspector-db-sync-pd272\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " pod="openstack/ironic-inspector-db-sync-pd272" Feb 24 05:53:19.650261 master-0 kubenswrapper[34361]: I0224 05:53:19.650200 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-pd272" Feb 24 05:53:19.670848 master-0 kubenswrapper[34361]: I0224 05:53:19.670798 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/700c3143-d1a3-47a3-92f5-02a0b1e428a4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:19.762532 master-0 kubenswrapper[34361]: I0224 05:53:19.762461 34361 scope.go:117] "RemoveContainer" containerID="c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8" Feb 24 05:53:19.770900 master-0 kubenswrapper[34361]: I0224 05:53:19.770853 34361 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:19.772042 master-0 kubenswrapper[34361]: I0224 05:53:19.772017 34361 scope.go:117] "RemoveContainer" containerID="8d2749b27df058fd8c580c7ee172eac939e79e13640fc3cdd1176aef20aced3c" Feb 24 05:53:19.773284 master-0 
kubenswrapper[34361]: E0224 05:53:19.773252 34361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-856d98ff5d-2p7np_openstack(40a5b237-764f-4367-85a5-4153a8f90a3e)\"" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" podUID="40a5b237-764f-4367-85a5-4153a8f90a3e" Feb 24 05:53:19.885914 master-0 kubenswrapper[34361]: I0224 05:53:19.885478 34361 scope.go:117] "RemoveContainer" containerID="abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663" Feb 24 05:53:19.929063 master-0 kubenswrapper[34361]: I0224 05:53:19.925523 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-555fd64789-cgpft"] Feb 24 05:53:19.931730 master-0 kubenswrapper[34361]: I0224 05:53:19.929595 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" Feb 24 05:53:19.992658 master-0 kubenswrapper[34361]: I0224 05:53:19.978158 34361 scope.go:117] "RemoveContainer" containerID="be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074" Feb 24 05:53:19.992658 master-0 kubenswrapper[34361]: E0224 05:53:19.982555 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074\": container with ID starting with be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074 not found: ID does not exist" containerID="be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074" Feb 24 05:53:19.992658 master-0 kubenswrapper[34361]: I0224 05:53:19.982655 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074"} err="failed to get container status 
\"be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074\": rpc error: code = NotFound desc = could not find container \"be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074\": container with ID starting with be12c84a9104c46902effdcc5e08f45363e8dfda49b4e87c40f902fee9e4f074 not found: ID does not exist" Feb 24 05:53:19.992658 master-0 kubenswrapper[34361]: I0224 05:53:19.982688 34361 scope.go:117] "RemoveContainer" containerID="c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8" Feb 24 05:53:19.992658 master-0 kubenswrapper[34361]: I0224 05:53:19.992532 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-555fd64789-cgpft"] Feb 24 05:53:19.998433 master-0 kubenswrapper[34361]: E0224 05:53:19.994166 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8\": container with ID starting with c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8 not found: ID does not exist" containerID="c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8" Feb 24 05:53:19.998433 master-0 kubenswrapper[34361]: I0224 05:53:19.994229 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8"} err="failed to get container status \"c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8\": rpc error: code = NotFound desc = could not find container \"c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8\": container with ID starting with c83a58778fa74559b98b933ae2239f61460d230529049c178b776b567793eed8 not found: ID does not exist" Feb 24 05:53:19.998433 master-0 kubenswrapper[34361]: I0224 05:53:19.994261 34361 scope.go:117] "RemoveContainer" containerID="abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663" Feb 24 05:53:20.011100 
master-0 kubenswrapper[34361]: E0224 05:53:20.003563 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663\": container with ID starting with abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663 not found: ID does not exist" containerID="abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663" Feb 24 05:53:20.011100 master-0 kubenswrapper[34361]: I0224 05:53:20.003622 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663"} err="failed to get container status \"abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663\": rpc error: code = NotFound desc = could not find container \"abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663\": container with ID starting with abba14a9d83c013392814f045bc1cc7b0b1c9b871724050cb75b4193d8291663 not found: ID does not exist" Feb 24 05:53:20.051353 master-0 kubenswrapper[34361]: I0224 05:53:20.049885 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564d4966c5-82kwv"] Feb 24 05:53:20.051353 master-0 kubenswrapper[34361]: I0224 05:53:20.050263 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" podUID="76fe2580-1b17-4dd5-bdac-693e4027a09e" containerName="dnsmasq-dns" containerID="cri-o://bf87fa3ad91c9791fd5c0f1f5dee9989e63c440a986753d19e543def1d63c006" gracePeriod=10 Feb 24 05:53:20.272829 master-0 kubenswrapper[34361]: I0224 05:53:20.272765 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-pd272"] Feb 24 05:53:20.496624 master-0 kubenswrapper[34361]: I0224 05:53:20.495345 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8695dc84b-bccck"] Feb 24 05:53:20.496624 master-0 
kubenswrapper[34361]: E0224 05:53:20.496176 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api" Feb 24 05:53:20.496624 master-0 kubenswrapper[34361]: I0224 05:53:20.496192 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api" Feb 24 05:53:20.496624 master-0 kubenswrapper[34361]: E0224 05:53:20.496240 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api" Feb 24 05:53:20.496624 master-0 kubenswrapper[34361]: I0224 05:53:20.496254 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api" Feb 24 05:53:20.496624 master-0 kubenswrapper[34361]: E0224 05:53:20.496270 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api-log" Feb 24 05:53:20.496624 master-0 kubenswrapper[34361]: I0224 05:53:20.496278 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api-log" Feb 24 05:53:20.496624 master-0 kubenswrapper[34361]: E0224 05:53:20.496361 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="init" Feb 24 05:53:20.496624 master-0 kubenswrapper[34361]: I0224 05:53:20.496389 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="init" Feb 24 05:53:20.498130 master-0 kubenswrapper[34361]: I0224 05:53:20.496744 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api-log" Feb 24 05:53:20.498130 master-0 kubenswrapper[34361]: I0224 05:53:20.496768 34361 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api" Feb 24 05:53:20.498130 master-0 kubenswrapper[34361]: I0224 05:53:20.497413 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" containerName="ironic-api" Feb 24 05:53:20.499348 master-0 kubenswrapper[34361]: I0224 05:53:20.499298 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.513299 master-0 kubenswrapper[34361]: I0224 05:53:20.504709 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 24 05:53:20.513299 master-0 kubenswrapper[34361]: I0224 05:53:20.504812 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 24 05:53:20.513299 master-0 kubenswrapper[34361]: I0224 05:53:20.504723 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 24 05:53:20.513683 master-0 kubenswrapper[34361]: I0224 05:53:20.513628 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8695dc84b-bccck"] Feb 24 05:53:20.567407 master-0 kubenswrapper[34361]: I0224 05:53:20.566415 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-combined-ca-bundle\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.567407 master-0 kubenswrapper[34361]: I0224 05:53:20.566636 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-public-tls-certs\") pod \"swift-proxy-8695dc84b-bccck\" (UID: 
\"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.567407 master-0 kubenswrapper[34361]: I0224 05:53:20.566829 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-internal-tls-certs\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.567407 master-0 kubenswrapper[34361]: I0224 05:53:20.567011 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcffc\" (UniqueName: \"kubernetes.io/projected/54d8708a-1dae-47bc-aead-fa87ab028821-kube-api-access-qcffc\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.567407 master-0 kubenswrapper[34361]: I0224 05:53:20.567064 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54d8708a-1dae-47bc-aead-fa87ab028821-etc-swift\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.567407 master-0 kubenswrapper[34361]: I0224 05:53:20.567278 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d8708a-1dae-47bc-aead-fa87ab028821-run-httpd\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.567407 master-0 kubenswrapper[34361]: I0224 05:53:20.567295 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/54d8708a-1dae-47bc-aead-fa87ab028821-log-httpd\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.567915 master-0 kubenswrapper[34361]: I0224 05:53:20.567442 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-config-data\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.606186 master-0 kubenswrapper[34361]: I0224 05:53:20.606125 34361 generic.go:334] "Generic (PLEG): container finished" podID="76fe2580-1b17-4dd5-bdac-693e4027a09e" containerID="bf87fa3ad91c9791fd5c0f1f5dee9989e63c440a986753d19e543def1d63c006" exitCode=0 Feb 24 05:53:20.679663 master-0 kubenswrapper[34361]: I0224 05:53:20.677070 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54d8708a-1dae-47bc-aead-fa87ab028821-etc-swift\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.680482 master-0 kubenswrapper[34361]: I0224 05:53:20.678437 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700c3143-d1a3-47a3-92f5-02a0b1e428a4" path="/var/lib/kubelet/pods/700c3143-d1a3-47a3-92f5-02a0b1e428a4/volumes" Feb 24 05:53:20.684101 master-0 kubenswrapper[34361]: I0224 05:53:20.682174 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 24 05:53:20.684581 master-0 kubenswrapper[34361]: I0224 05:53:20.684530 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" 
event={"ID":"76fe2580-1b17-4dd5-bdac-693e4027a09e","Type":"ContainerDied","Data":"bf87fa3ad91c9791fd5c0f1f5dee9989e63c440a986753d19e543def1d63c006"} Feb 24 05:53:20.685217 master-0 kubenswrapper[34361]: I0224 05:53:20.685184 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d8708a-1dae-47bc-aead-fa87ab028821-run-httpd\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.688420 master-0 kubenswrapper[34361]: I0224 05:53:20.688391 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d8708a-1dae-47bc-aead-fa87ab028821-log-httpd\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.688928 master-0 kubenswrapper[34361]: I0224 05:53:20.685695 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d8708a-1dae-47bc-aead-fa87ab028821-run-httpd\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.688928 master-0 kubenswrapper[34361]: I0224 05:53:20.688882 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/54d8708a-1dae-47bc-aead-fa87ab028821-log-httpd\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.689702 master-0 kubenswrapper[34361]: I0224 05:53:20.688578 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-config-data\") pod \"swift-proxy-8695dc84b-bccck\" 
(UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.694329 master-0 kubenswrapper[34361]: I0224 05:53:20.692632 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/54d8708a-1dae-47bc-aead-fa87ab028821-etc-swift\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.695201 master-0 kubenswrapper[34361]: I0224 05:53:20.695179 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-combined-ca-bundle\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.695521 master-0 kubenswrapper[34361]: I0224 05:53:20.695504 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-public-tls-certs\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.695839 master-0 kubenswrapper[34361]: I0224 05:53:20.695817 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-internal-tls-certs\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.696711 master-0 kubenswrapper[34361]: I0224 05:53:20.696680 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcffc\" (UniqueName: \"kubernetes.io/projected/54d8708a-1dae-47bc-aead-fa87ab028821-kube-api-access-qcffc\") pod 
\"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.697142 master-0 kubenswrapper[34361]: I0224 05:53:20.696029 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-config-data\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.699000 master-0 kubenswrapper[34361]: I0224 05:53:20.698979 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 24 05:53:20.699467 master-0 kubenswrapper[34361]: I0224 05:53:20.699452 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 24 05:53:20.713643 master-0 kubenswrapper[34361]: I0224 05:53:20.713577 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-combined-ca-bundle\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.714824 master-0 kubenswrapper[34361]: I0224 05:53:20.714784 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-internal-tls-certs\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.715019 master-0 kubenswrapper[34361]: I0224 05:53:20.714974 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/54d8708a-1dae-47bc-aead-fa87ab028821-public-tls-certs\") pod \"swift-proxy-8695dc84b-bccck\" (UID: 
\"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.787332 master-0 kubenswrapper[34361]: I0224 05:53:20.787152 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcffc\" (UniqueName: \"kubernetes.io/projected/54d8708a-1dae-47bc-aead-fa87ab028821-kube-api-access-qcffc\") pod \"swift-proxy-8695dc84b-bccck\" (UID: \"54d8708a-1dae-47bc-aead-fa87ab028821\") " pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:20.886699 master-0 kubenswrapper[34361]: I0224 05:53:20.886038 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:21.656147 master-0 kubenswrapper[34361]: I0224 05:53:21.655980 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b7346-volume-lvm-iscsi-0" Feb 24 05:53:21.918676 master-0 kubenswrapper[34361]: I0224 05:53:21.917337 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b7346-backup-0" Feb 24 05:53:21.982497 master-0 kubenswrapper[34361]: I0224 05:53:21.982420 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:53:21.982848 master-0 kubenswrapper[34361]: I0224 05:53:21.982810 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-bdafd-default-external-api-0" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-log" containerID="cri-o://0942ee6a98d6bb5a9765463e9f2e7b660623d6201a2a9b274816da5132fa8c64" gracePeriod=30 Feb 24 05:53:21.985503 master-0 kubenswrapper[34361]: I0224 05:53:21.983524 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-bdafd-default-external-api-0" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-httpd" 
containerID="cri-o://e7c9aa8d6472db53b2f7d10161065f254ecbe1f61c23fa12fbb9fd5d661a9703" gracePeriod=30 Feb 24 05:53:21.985503 master-0 kubenswrapper[34361]: I0224 05:53:21.984997 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-b7346-scheduler-0" Feb 24 05:53:22.646230 master-0 kubenswrapper[34361]: I0224 05:53:22.646080 34361 generic.go:334] "Generic (PLEG): container finished" podID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerID="0942ee6a98d6bb5a9765463e9f2e7b660623d6201a2a9b274816da5132fa8c64" exitCode=143 Feb 24 05:53:22.646230 master-0 kubenswrapper[34361]: I0224 05:53:22.646157 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"167c633e-12d2-45f6-a746-7437ee0bbfff","Type":"ContainerDied","Data":"0942ee6a98d6bb5a9765463e9f2e7b660623d6201a2a9b274816da5132fa8c64"} Feb 24 05:53:22.794108 master-0 kubenswrapper[34361]: W0224 05:53:22.794016 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode59553b6_01d7_45a8_8475_647431627701.slice/crio-677cf3bd52fee7c3317f7d93aae9a5e99b1e65346ae46196b17aef8d62b4e4d4 WatchSource:0}: Error finding container 677cf3bd52fee7c3317f7d93aae9a5e99b1e65346ae46196b17aef8d62b4e4d4: Status 404 returned error can't find the container with id 677cf3bd52fee7c3317f7d93aae9a5e99b1e65346ae46196b17aef8d62b4e4d4 Feb 24 05:53:23.457184 master-0 kubenswrapper[34361]: I0224 05:53:23.457107 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:53:23.542399 master-0 kubenswrapper[34361]: I0224 05:53:23.542321 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-nb\") pod \"76fe2580-1b17-4dd5-bdac-693e4027a09e\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " Feb 24 05:53:23.542718 master-0 kubenswrapper[34361]: I0224 05:53:23.542639 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5txg6\" (UniqueName: \"kubernetes.io/projected/76fe2580-1b17-4dd5-bdac-693e4027a09e-kube-api-access-5txg6\") pod \"76fe2580-1b17-4dd5-bdac-693e4027a09e\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " Feb 24 05:53:23.542718 master-0 kubenswrapper[34361]: I0224 05:53:23.542670 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-swift-storage-0\") pod \"76fe2580-1b17-4dd5-bdac-693e4027a09e\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " Feb 24 05:53:23.542823 master-0 kubenswrapper[34361]: I0224 05:53:23.542797 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-svc\") pod \"76fe2580-1b17-4dd5-bdac-693e4027a09e\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " Feb 24 05:53:23.542898 master-0 kubenswrapper[34361]: I0224 05:53:23.542876 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-sb\") pod \"76fe2580-1b17-4dd5-bdac-693e4027a09e\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " Feb 24 05:53:23.543043 master-0 kubenswrapper[34361]: I0224 05:53:23.543010 
34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-config\") pod \"76fe2580-1b17-4dd5-bdac-693e4027a09e\" (UID: \"76fe2580-1b17-4dd5-bdac-693e4027a09e\") " Feb 24 05:53:23.558381 master-0 kubenswrapper[34361]: I0224 05:53:23.558238 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8695dc84b-bccck"] Feb 24 05:53:23.565473 master-0 kubenswrapper[34361]: I0224 05:53:23.565387 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76fe2580-1b17-4dd5-bdac-693e4027a09e-kube-api-access-5txg6" (OuterVolumeSpecName: "kube-api-access-5txg6") pod "76fe2580-1b17-4dd5-bdac-693e4027a09e" (UID: "76fe2580-1b17-4dd5-bdac-693e4027a09e"). InnerVolumeSpecName "kube-api-access-5txg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:23.611459 master-0 kubenswrapper[34361]: I0224 05:53:23.611167 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-config" (OuterVolumeSpecName: "config") pod "76fe2580-1b17-4dd5-bdac-693e4027a09e" (UID: "76fe2580-1b17-4dd5-bdac-693e4027a09e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:23.648821 master-0 kubenswrapper[34361]: I0224 05:53:23.648754 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5txg6\" (UniqueName: \"kubernetes.io/projected/76fe2580-1b17-4dd5-bdac-693e4027a09e-kube-api-access-5txg6\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:23.648821 master-0 kubenswrapper[34361]: I0224 05:53:23.648796 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:23.661759 master-0 kubenswrapper[34361]: I0224 05:53:23.661660 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "76fe2580-1b17-4dd5-bdac-693e4027a09e" (UID: "76fe2580-1b17-4dd5-bdac-693e4027a09e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:23.667865 master-0 kubenswrapper[34361]: I0224 05:53:23.667799 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" event={"ID":"76fe2580-1b17-4dd5-bdac-693e4027a09e","Type":"ContainerDied","Data":"458afe3acc00c047535908b73d4386c29e9cd2f688113b67fb1c968c8f330588"} Feb 24 05:53:23.667865 master-0 kubenswrapper[34361]: I0224 05:53:23.667845 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-564d4966c5-82kwv" Feb 24 05:53:23.668171 master-0 kubenswrapper[34361]: I0224 05:53:23.667886 34361 scope.go:117] "RemoveContainer" containerID="bf87fa3ad91c9791fd5c0f1f5dee9989e63c440a986753d19e543def1d63c006" Feb 24 05:53:23.671956 master-0 kubenswrapper[34361]: I0224 05:53:23.671718 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-pd272" event={"ID":"e59553b6-01d7-45a8-8475-647431627701","Type":"ContainerStarted","Data":"677cf3bd52fee7c3317f7d93aae9a5e99b1e65346ae46196b17aef8d62b4e4d4"} Feb 24 05:53:23.674290 master-0 kubenswrapper[34361]: I0224 05:53:23.674231 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8695dc84b-bccck" event={"ID":"54d8708a-1dae-47bc-aead-fa87ab028821","Type":"ContainerStarted","Data":"2ed275a865b1ac9ac799ef67897b8b21b96ff5197849a01d13ef1afdc00a7383"} Feb 24 05:53:23.691150 master-0 kubenswrapper[34361]: I0224 05:53:23.691041 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "76fe2580-1b17-4dd5-bdac-693e4027a09e" (UID: "76fe2580-1b17-4dd5-bdac-693e4027a09e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:23.713137 master-0 kubenswrapper[34361]: I0224 05:53:23.712838 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "76fe2580-1b17-4dd5-bdac-693e4027a09e" (UID: "76fe2580-1b17-4dd5-bdac-693e4027a09e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:23.714760 master-0 kubenswrapper[34361]: I0224 05:53:23.714691 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "76fe2580-1b17-4dd5-bdac-693e4027a09e" (UID: "76fe2580-1b17-4dd5-bdac-693e4027a09e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:23.750943 master-0 kubenswrapper[34361]: I0224 05:53:23.750893 34361 scope.go:117] "RemoveContainer" containerID="e3e63cffc76806b2461a1e6bc7c6a3e085e9a0b605198c8b84b959db2c742953" Feb 24 05:53:23.754837 master-0 kubenswrapper[34361]: I0224 05:53:23.753718 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:23.754837 master-0 kubenswrapper[34361]: I0224 05:53:23.754572 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:23.754837 master-0 kubenswrapper[34361]: I0224 05:53:23.754658 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:23.754837 master-0 kubenswrapper[34361]: I0224 05:53:23.754671 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/76fe2580-1b17-4dd5-bdac-693e4027a09e-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:23.754837 master-0 kubenswrapper[34361]: I0224 05:53:23.754695 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:53:23.894416 master-0 kubenswrapper[34361]: I0224 05:53:23.894224 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:53:23.895044 master-0 kubenswrapper[34361]: I0224 05:53:23.894546 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-bdafd-default-internal-api-0" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerName="glance-log" containerID="cri-o://f21829fd6b0d389f5b690cffbcf84955a80e446f4b913fa10461795c84683f71" gracePeriod=30 Feb 24 05:53:23.895148 master-0 kubenswrapper[34361]: I0224 05:53:23.895110 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-bdafd-default-internal-api-0" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerName="glance-httpd" containerID="cri-o://c3119b8a3607fa9c3df6b54da589b714968f583afefab83eb324d1696714b2b6" gracePeriod=30 Feb 24 05:53:24.143893 master-0 kubenswrapper[34361]: I0224 05:53:24.143783 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-564d4966c5-82kwv"] Feb 24 05:53:24.204975 master-0 kubenswrapper[34361]: I0224 05:53:24.204885 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-564d4966c5-82kwv"] Feb 24 05:53:24.627996 master-0 kubenswrapper[34361]: I0224 05:53:24.627921 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76fe2580-1b17-4dd5-bdac-693e4027a09e" path="/var/lib/kubelet/pods/76fe2580-1b17-4dd5-bdac-693e4027a09e/volumes" Feb 24 05:53:24.695916 master-0 kubenswrapper[34361]: I0224 05:53:24.695839 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8695dc84b-bccck" event={"ID":"54d8708a-1dae-47bc-aead-fa87ab028821","Type":"ContainerStarted","Data":"da7463f8818f7494b530d4784116b880ccce9ccc9b36e29782908cc4fefcac85"} Feb 24 05:53:24.695916 master-0 
kubenswrapper[34361]: I0224 05:53:24.695905 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8695dc84b-bccck" event={"ID":"54d8708a-1dae-47bc-aead-fa87ab028821","Type":"ContainerStarted","Data":"7ba27a5caeb8512814942f69eb8db5b1241d73fd4677f2b6c3a2ad5cde1731e0"} Feb 24 05:53:24.696231 master-0 kubenswrapper[34361]: I0224 05:53:24.696085 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:24.696231 master-0 kubenswrapper[34361]: I0224 05:53:24.696140 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:24.705193 master-0 kubenswrapper[34361]: I0224 05:53:24.705096 34361 generic.go:334] "Generic (PLEG): container finished" podID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerID="f21829fd6b0d389f5b690cffbcf84955a80e446f4b913fa10461795c84683f71" exitCode=143 Feb 24 05:53:24.705193 master-0 kubenswrapper[34361]: I0224 05:53:24.705183 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"9353daa8-f1c5-493d-8f31-bfc3074c6223","Type":"ContainerDied","Data":"f21829fd6b0d389f5b690cffbcf84955a80e446f4b913fa10461795c84683f71"} Feb 24 05:53:24.747365 master-0 kubenswrapper[34361]: I0224 05:53:24.742007 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8695dc84b-bccck" podStartSLOduration=4.741978681 podStartE2EDuration="4.741978681s" podCreationTimestamp="2026-02-24 05:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:24.725512397 +0000 UTC m=+964.428129453" watchObservedRunningTime="2026-02-24 05:53:24.741978681 +0000 UTC m=+964.444595717" Feb 24 05:53:25.423537 master-0 kubenswrapper[34361]: I0224 05:53:25.423358 34361 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/glance-bdafd-default-external-api-0" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.208:9292/healthcheck\": read tcp 10.128.0.2:41682->10.128.0.208:9292: read: connection reset by peer" Feb 24 05:53:25.423537 master-0 kubenswrapper[34361]: I0224 05:53:25.423326 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-bdafd-default-external-api-0" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.208:9292/healthcheck\": read tcp 10.128.0.2:41684->10.128.0.208:9292: read: connection reset by peer" Feb 24 05:53:25.753199 master-0 kubenswrapper[34361]: I0224 05:53:25.753120 34361 generic.go:334] "Generic (PLEG): container finished" podID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerID="e7c9aa8d6472db53b2f7d10161065f254ecbe1f61c23fa12fbb9fd5d661a9703" exitCode=0 Feb 24 05:53:25.753512 master-0 kubenswrapper[34361]: I0224 05:53:25.753214 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"167c633e-12d2-45f6-a746-7437ee0bbfff","Type":"ContainerDied","Data":"e7c9aa8d6472db53b2f7d10161065f254ecbe1f61c23fa12fbb9fd5d661a9703"} Feb 24 05:53:25.936217 master-0 kubenswrapper[34361]: I0224 05:53:25.935952 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-564b95b965-jqq92" Feb 24 05:53:26.049345 master-0 kubenswrapper[34361]: I0224 05:53:26.049258 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d477bdc58-p8d8s"] Feb 24 05:53:26.051866 master-0 kubenswrapper[34361]: I0224 05:53:26.051730 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d477bdc58-p8d8s" podUID="3057364f-388c-47da-adc8-4c8e074b8362" containerName="neutron-api" 
containerID="cri-o://b4136cfb871ee6b478ec62af981a996389b5b5b3043351647079c6db301b06b0" gracePeriod=30 Feb 24 05:53:26.052858 master-0 kubenswrapper[34361]: I0224 05:53:26.052724 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d477bdc58-p8d8s" podUID="3057364f-388c-47da-adc8-4c8e074b8362" containerName="neutron-httpd" containerID="cri-o://4f7f4865ef45e0d6f6cd182f7e4ff15bcccff460d685cf4e910cc0f51615f94e" gracePeriod=30 Feb 24 05:53:26.787276 master-0 kubenswrapper[34361]: I0224 05:53:26.780844 34361 generic.go:334] "Generic (PLEG): container finished" podID="3057364f-388c-47da-adc8-4c8e074b8362" containerID="4f7f4865ef45e0d6f6cd182f7e4ff15bcccff460d685cf4e910cc0f51615f94e" exitCode=0 Feb 24 05:53:26.787276 master-0 kubenswrapper[34361]: I0224 05:53:26.780937 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d477bdc58-p8d8s" event={"ID":"3057364f-388c-47da-adc8-4c8e074b8362","Type":"ContainerDied","Data":"4f7f4865ef45e0d6f6cd182f7e4ff15bcccff460d685cf4e910cc0f51615f94e"} Feb 24 05:53:26.818653 master-0 kubenswrapper[34361]: I0224 05:53:26.792564 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-pd272" event={"ID":"e59553b6-01d7-45a8-8475-647431627701","Type":"ContainerStarted","Data":"df0fe7b9bd2f8eb372c48486e25717ab273f2d537f398fb8c3309e2504cfb362"} Feb 24 05:53:26.833152 master-0 kubenswrapper[34361]: I0224 05:53:26.833036 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-pd272" podStartSLOduration=4.404386165 podStartE2EDuration="7.833010237s" podCreationTimestamp="2026-02-24 05:53:19 +0000 UTC" firstStartedPulling="2026-02-24 05:53:22.801042633 +0000 UTC m=+962.503659679" lastFinishedPulling="2026-02-24 05:53:26.229666705 +0000 UTC m=+965.932283751" observedRunningTime="2026-02-24 05:53:26.818231449 +0000 UTC m=+966.520848495" watchObservedRunningTime="2026-02-24 
05:53:26.833010237 +0000 UTC m=+966.535627283" Feb 24 05:53:27.818117 master-0 kubenswrapper[34361]: I0224 05:53:27.816998 34361 generic.go:334] "Generic (PLEG): container finished" podID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerID="c3119b8a3607fa9c3df6b54da589b714968f583afefab83eb324d1696714b2b6" exitCode=0 Feb 24 05:53:27.818117 master-0 kubenswrapper[34361]: I0224 05:53:27.817071 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"9353daa8-f1c5-493d-8f31-bfc3074c6223","Type":"ContainerDied","Data":"c3119b8a3607fa9c3df6b54da589b714968f583afefab83eb324d1696714b2b6"} Feb 24 05:53:28.902762 master-0 kubenswrapper[34361]: I0224 05:53:28.902300 34361 generic.go:334] "Generic (PLEG): container finished" podID="3057364f-388c-47da-adc8-4c8e074b8362" containerID="b4136cfb871ee6b478ec62af981a996389b5b5b3043351647079c6db301b06b0" exitCode=0 Feb 24 05:53:28.902762 master-0 kubenswrapper[34361]: I0224 05:53:28.902389 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d477bdc58-p8d8s" event={"ID":"3057364f-388c-47da-adc8-4c8e074b8362","Type":"ContainerDied","Data":"b4136cfb871ee6b478ec62af981a996389b5b5b3043351647079c6db301b06b0"} Feb 24 05:53:29.927348 master-0 kubenswrapper[34361]: I0224 05:53:29.927240 34361 generic.go:334] "Generic (PLEG): container finished" podID="e59553b6-01d7-45a8-8475-647431627701" containerID="df0fe7b9bd2f8eb372c48486e25717ab273f2d537f398fb8c3309e2504cfb362" exitCode=0 Feb 24 05:53:29.928176 master-0 kubenswrapper[34361]: I0224 05:53:29.927371 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-pd272" event={"ID":"e59553b6-01d7-45a8-8475-647431627701","Type":"ContainerDied","Data":"df0fe7b9bd2f8eb372c48486e25717ab273f2d537f398fb8c3309e2504cfb362"} Feb 24 05:53:30.900859 master-0 kubenswrapper[34361]: I0224 05:53:30.900774 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:30.902725 master-0 kubenswrapper[34361]: I0224 05:53:30.902683 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8695dc84b-bccck" Feb 24 05:53:33.598615 master-0 kubenswrapper[34361]: I0224 05:53:33.598542 34361 scope.go:117] "RemoveContainer" containerID="8d2749b27df058fd8c580c7ee172eac939e79e13640fc3cdd1176aef20aced3c" Feb 24 05:53:34.154749 master-0 kubenswrapper[34361]: I0224 05:53:34.154690 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:34.162353 master-0 kubenswrapper[34361]: I0224 05:53:34.161038 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-pd272" Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301530 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-config-data\") pod \"167c633e-12d2-45f6-a746-7437ee0bbfff\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301622 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-public-tls-certs\") pod \"167c633e-12d2-45f6-a746-7437ee0bbfff\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301647 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-scripts\") pod \"e59553b6-01d7-45a8-8475-647431627701\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 
05:53:34.301696 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9c9\" (UniqueName: \"kubernetes.io/projected/e59553b6-01d7-45a8-8475-647431627701-kube-api-access-fw9c9\") pod \"e59553b6-01d7-45a8-8475-647431627701\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301755 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-combined-ca-bundle\") pod \"167c633e-12d2-45f6-a746-7437ee0bbfff\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301837 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"e59553b6-01d7-45a8-8475-647431627701\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301865 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e59553b6-01d7-45a8-8475-647431627701-etc-podinfo\") pod \"e59553b6-01d7-45a8-8475-647431627701\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301895 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-config\") pod \"e59553b6-01d7-45a8-8475-647431627701\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301928 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-scripts\") pod \"167c633e-12d2-45f6-a746-7437ee0bbfff\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.301957 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-logs\") pod \"167c633e-12d2-45f6-a746-7437ee0bbfff\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.302007 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd5nv\" (UniqueName: \"kubernetes.io/projected/167c633e-12d2-45f6-a746-7437ee0bbfff-kube-api-access-kd5nv\") pod \"167c633e-12d2-45f6-a746-7437ee0bbfff\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.302189 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"167c633e-12d2-45f6-a746-7437ee0bbfff\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.302211 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic\") pod \"e59553b6-01d7-45a8-8475-647431627701\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.302244 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-combined-ca-bundle\") pod \"e59553b6-01d7-45a8-8475-647431627701\" (UID: \"e59553b6-01d7-45a8-8475-647431627701\") " 
Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.302393 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-httpd-run\") pod \"167c633e-12d2-45f6-a746-7437ee0bbfff\" (UID: \"167c633e-12d2-45f6-a746-7437ee0bbfff\") " Feb 24 05:53:34.303339 master-0 kubenswrapper[34361]: I0224 05:53:34.303257 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "167c633e-12d2-45f6-a746-7437ee0bbfff" (UID: "167c633e-12d2-45f6-a746-7437ee0bbfff"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:34.333342 master-0 kubenswrapper[34361]: I0224 05:53:34.333136 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "e59553b6-01d7-45a8-8475-647431627701" (UID: "e59553b6-01d7-45a8-8475-647431627701"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:34.339364 master-0 kubenswrapper[34361]: I0224 05:53:34.334715 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "e59553b6-01d7-45a8-8475-647431627701" (UID: "e59553b6-01d7-45a8-8475-647431627701"). InnerVolumeSpecName "var-lib-ironic". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:34.339364 master-0 kubenswrapper[34361]: I0224 05:53:34.336227 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-logs" (OuterVolumeSpecName: "logs") pod "167c633e-12d2-45f6-a746-7437ee0bbfff" (UID: "167c633e-12d2-45f6-a746-7437ee0bbfff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:34.339364 master-0 kubenswrapper[34361]: I0224 05:53:34.338021 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59553b6-01d7-45a8-8475-647431627701-kube-api-access-fw9c9" (OuterVolumeSpecName: "kube-api-access-fw9c9") pod "e59553b6-01d7-45a8-8475-647431627701" (UID: "e59553b6-01d7-45a8-8475-647431627701"). InnerVolumeSpecName "kube-api-access-fw9c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:34.339364 master-0 kubenswrapper[34361]: I0224 05:53:34.339035 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e59553b6-01d7-45a8-8475-647431627701-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "e59553b6-01d7-45a8-8475-647431627701" (UID: "e59553b6-01d7-45a8-8475-647431627701"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 05:53:34.343327 master-0 kubenswrapper[34361]: I0224 05:53:34.342649 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167c633e-12d2-45f6-a746-7437ee0bbfff-kube-api-access-kd5nv" (OuterVolumeSpecName: "kube-api-access-kd5nv") pod "167c633e-12d2-45f6-a746-7437ee0bbfff" (UID: "167c633e-12d2-45f6-a746-7437ee0bbfff"). InnerVolumeSpecName "kube-api-access-kd5nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:34.349331 master-0 kubenswrapper[34361]: I0224 05:53:34.346729 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-scripts" (OuterVolumeSpecName: "scripts") pod "e59553b6-01d7-45a8-8475-647431627701" (UID: "e59553b6-01d7-45a8-8475-647431627701"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:34.379348 master-0 kubenswrapper[34361]: I0224 05:53:34.378027 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-scripts" (OuterVolumeSpecName: "scripts") pod "167c633e-12d2-45f6-a746-7437ee0bbfff" (UID: "167c633e-12d2-45f6-a746-7437ee0bbfff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:34.379348 master-0 kubenswrapper[34361]: I0224 05:53:34.378258 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65" (OuterVolumeSpecName: "glance") pod "167c633e-12d2-45f6-a746-7437ee0bbfff" (UID: "167c633e-12d2-45f6-a746-7437ee0bbfff"). InnerVolumeSpecName "pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 05:53:34.391340 master-0 kubenswrapper[34361]: I0224 05:53:34.389502 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "167c633e-12d2-45f6-a746-7437ee0bbfff" (UID: "167c633e-12d2-45f6-a746-7437ee0bbfff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417050 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417109 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417123 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd5nv\" (UniqueName: \"kubernetes.io/projected/167c633e-12d2-45f6-a746-7437ee0bbfff-kube-api-access-kd5nv\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417174 34361 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") on node \"master-0\" " Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417191 34361 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417202 34361 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/167c633e-12d2-45f6-a746-7437ee0bbfff-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417213 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-scripts\") 
on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417224 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9c9\" (UniqueName: \"kubernetes.io/projected/e59553b6-01d7-45a8-8475-647431627701-kube-api-access-fw9c9\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417236 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417254 34361 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/e59553b6-01d7-45a8-8475-647431627701-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.421341 master-0 kubenswrapper[34361]: I0224 05:53:34.417266 34361 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e59553b6-01d7-45a8-8475-647431627701-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.431225 master-0 kubenswrapper[34361]: I0224 05:53:34.429374 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-config-data" (OuterVolumeSpecName: "config-data") pod "167c633e-12d2-45f6-a746-7437ee0bbfff" (UID: "167c633e-12d2-45f6-a746-7437ee0bbfff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:34.457342 master-0 kubenswrapper[34361]: I0224 05:53:34.452932 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e59553b6-01d7-45a8-8475-647431627701" (UID: "e59553b6-01d7-45a8-8475-647431627701"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:34.457342 master-0 kubenswrapper[34361]: I0224 05:53:34.457350 34361 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 24 05:53:34.461333 master-0 kubenswrapper[34361]: I0224 05:53:34.457812 34361 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf" (UniqueName: "kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65") on node "master-0" Feb 24 05:53:34.463344 master-0 kubenswrapper[34361]: I0224 05:53:34.462569 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-config" (OuterVolumeSpecName: "config") pod "e59553b6-01d7-45a8-8475-647431627701" (UID: "e59553b6-01d7-45a8-8475-647431627701"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:34.520081 master-0 kubenswrapper[34361]: I0224 05:53:34.520002 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.520081 master-0 kubenswrapper[34361]: I0224 05:53:34.520058 34361 reconciler_common.go:293] "Volume detached for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.520081 master-0 kubenswrapper[34361]: I0224 05:53:34.520078 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e59553b6-01d7-45a8-8475-647431627701-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.520081 master-0 kubenswrapper[34361]: I0224 05:53:34.520090 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:34.533559 master-0 kubenswrapper[34361]: I0224 05:53:34.533379 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "167c633e-12d2-45f6-a746-7437ee0bbfff" (UID: "167c633e-12d2-45f6-a746-7437ee0bbfff"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:34.623744 master-0 kubenswrapper[34361]: I0224 05:53:34.623194 34361 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/167c633e-12d2-45f6-a746-7437ee0bbfff-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:35.045441 master-0 kubenswrapper[34361]: I0224 05:53:35.045361 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"167c633e-12d2-45f6-a746-7437ee0bbfff","Type":"ContainerDied","Data":"6ab03244ef140d4acc3ab994daa83b061d31c315a4212896c9d2a272c8b71fc0"} Feb 24 05:53:35.046083 master-0 kubenswrapper[34361]: I0224 05:53:35.046061 34361 scope.go:117] "RemoveContainer" containerID="e7c9aa8d6472db53b2f7d10161065f254ecbe1f61c23fa12fbb9fd5d661a9703" Feb 24 05:53:35.046809 master-0 kubenswrapper[34361]: I0224 05:53:35.046781 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.065133 master-0 kubenswrapper[34361]: I0224 05:53:35.065026 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-pd272" event={"ID":"e59553b6-01d7-45a8-8475-647431627701","Type":"ContainerDied","Data":"677cf3bd52fee7c3317f7d93aae9a5e99b1e65346ae46196b17aef8d62b4e4d4"} Feb 24 05:53:35.065442 master-0 kubenswrapper[34361]: I0224 05:53:35.065425 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="677cf3bd52fee7c3317f7d93aae9a5e99b1e65346ae46196b17aef8d62b4e4d4" Feb 24 05:53:35.065939 master-0 kubenswrapper[34361]: I0224 05:53:35.065799 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-pd272" Feb 24 05:53:35.354332 master-0 kubenswrapper[34361]: I0224 05:53:35.354131 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:53:35.595459 master-0 kubenswrapper[34361]: I0224 05:53:35.588996 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:53:35.622466 master-0 kubenswrapper[34361]: I0224 05:53:35.622284 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:53:35.624898 master-0 kubenswrapper[34361]: E0224 05:53:35.624836 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59553b6-01d7-45a8-8475-647431627701" containerName="ironic-inspector-db-sync" Feb 24 05:53:35.624898 master-0 kubenswrapper[34361]: I0224 05:53:35.624877 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59553b6-01d7-45a8-8475-647431627701" containerName="ironic-inspector-db-sync" Feb 24 05:53:35.625335 master-0 kubenswrapper[34361]: E0224 05:53:35.624933 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fe2580-1b17-4dd5-bdac-693e4027a09e" containerName="dnsmasq-dns" Feb 24 05:53:35.625335 master-0 kubenswrapper[34361]: I0224 05:53:35.624948 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe2580-1b17-4dd5-bdac-693e4027a09e" containerName="dnsmasq-dns" Feb 24 05:53:35.625335 master-0 kubenswrapper[34361]: E0224 05:53:35.625061 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76fe2580-1b17-4dd5-bdac-693e4027a09e" containerName="init" Feb 24 05:53:35.625335 master-0 kubenswrapper[34361]: I0224 05:53:35.625075 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="76fe2580-1b17-4dd5-bdac-693e4027a09e" containerName="init" Feb 24 05:53:35.625335 master-0 kubenswrapper[34361]: E0224 05:53:35.625098 34361 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-log" Feb 24 05:53:35.625335 master-0 kubenswrapper[34361]: I0224 05:53:35.625109 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-log" Feb 24 05:53:35.625335 master-0 kubenswrapper[34361]: E0224 05:53:35.625132 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-httpd" Feb 24 05:53:35.625335 master-0 kubenswrapper[34361]: I0224 05:53:35.625141 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-httpd" Feb 24 05:53:35.626076 master-0 kubenswrapper[34361]: I0224 05:53:35.625995 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-log" Feb 24 05:53:35.626139 master-0 kubenswrapper[34361]: I0224 05:53:35.626090 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" containerName="glance-httpd" Feb 24 05:53:35.626139 master-0 kubenswrapper[34361]: I0224 05:53:35.626134 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="76fe2580-1b17-4dd5-bdac-693e4027a09e" containerName="dnsmasq-dns" Feb 24 05:53:35.626213 master-0 kubenswrapper[34361]: I0224 05:53:35.626174 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59553b6-01d7-45a8-8475-647431627701" containerName="ironic-inspector-db-sync" Feb 24 05:53:35.636320 master-0 kubenswrapper[34361]: I0224 05:53:35.636200 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.643219 master-0 kubenswrapper[34361]: I0224 05:53:35.643157 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 24 05:53:35.643861 master-0 kubenswrapper[34361]: I0224 05:53:35.643759 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bdafd-default-external-config-data" Feb 24 05:53:35.654424 master-0 kubenswrapper[34361]: I0224 05:53:35.654280 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:53:35.659012 master-0 kubenswrapper[34361]: I0224 05:53:35.658938 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:35.761136 master-0 kubenswrapper[34361]: I0224 05:53:35.761035 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.761475 master-0 kubenswrapper[34361]: I0224 05:53:35.761152 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.761475 master-0 kubenswrapper[34361]: I0224 05:53:35.761224 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-logs\") pod 
\"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.761978 master-0 kubenswrapper[34361]: I0224 05:53:35.761934 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.762069 master-0 kubenswrapper[34361]: I0224 05:53:35.762030 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.763034 master-0 kubenswrapper[34361]: I0224 05:53:35.762533 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hg6b\" (UniqueName: \"kubernetes.io/projected/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-kube-api-access-4hg6b\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.763348 master-0 kubenswrapper[34361]: I0224 05:53:35.763273 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.763472 master-0 kubenswrapper[34361]: I0224 05:53:35.763387 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.798676 master-0 kubenswrapper[34361]: I0224 05:53:35.798618 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7d9548858-h45cl" Feb 24 05:53:35.876475 master-0 kubenswrapper[34361]: I0224 05:53:35.875063 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.876475 master-0 kubenswrapper[34361]: I0224 05:53:35.875144 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-logs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.876475 master-0 kubenswrapper[34361]: I0224 05:53:35.875176 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.876475 master-0 kubenswrapper[34361]: I0224 05:53:35.875204 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.876475 master-0 kubenswrapper[34361]: I0224 05:53:35.875283 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hg6b\" (UniqueName: \"kubernetes.io/projected/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-kube-api-access-4hg6b\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.876475 master-0 kubenswrapper[34361]: I0224 05:53:35.875341 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.876475 master-0 kubenswrapper[34361]: I0224 05:53:35.875379 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.876475 master-0 kubenswrapper[34361]: I0224 05:53:35.875438 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.881252 master-0 kubenswrapper[34361]: I0224 05:53:35.880904 34361 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-scripts\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.882133 master-0 kubenswrapper[34361]: I0224 05:53:35.882070 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-httpd-run\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.882345 master-0 kubenswrapper[34361]: I0224 05:53:35.882108 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-logs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.886390 master-0 kubenswrapper[34361]: I0224 05:53:35.885496 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 05:53:35.886390 master-0 kubenswrapper[34361]: I0224 05:53:35.885552 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/7d863fd5501d6d1171206f6d6ea42c84796ef7fcbd0ecfb3be968cf37320363b/globalmount\"" pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.888873 master-0 kubenswrapper[34361]: I0224 05:53:35.888829 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-combined-ca-bundle\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.892049 master-0 kubenswrapper[34361]: I0224 05:53:35.891824 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-public-tls-certs\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.908168 master-0 kubenswrapper[34361]: I0224 05:53:35.908087 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fb464bf7d-gv8b6"] Feb 24 05:53:35.908491 master-0 kubenswrapper[34361]: I0224 05:53:35.908424 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-fb464bf7d-gv8b6" podUID="05343afd-e975-47cb-a3f4-58664d26d871" containerName="placement-log" containerID="cri-o://84af720b033e0813084f651dc8d59e820c61bb2232501fe50e4f346a78960db9" gracePeriod=30 Feb 24 05:53:35.909117 master-0 
kubenswrapper[34361]: I0224 05:53:35.909065 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-fb464bf7d-gv8b6" podUID="05343afd-e975-47cb-a3f4-58664d26d871" containerName="placement-api" containerID="cri-o://95ec194de761882be6bf22ca2973e3e0e5fbb4be965fad586d74dc01ee70cc37" gracePeriod=30 Feb 24 05:53:35.975550 master-0 kubenswrapper[34361]: I0224 05:53:35.963579 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-config-data\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:35.977678 master-0 kubenswrapper[34361]: I0224 05:53:35.977633 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hg6b\" (UniqueName: \"kubernetes.io/projected/7cc967eb-a8c3-4147-a3ac-bd6af5dd3025-kube-api-access-4hg6b\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0" Feb 24 05:53:36.091136 master-0 kubenswrapper[34361]: I0224 05:53:36.090893 34361 generic.go:334] "Generic (PLEG): container finished" podID="05343afd-e975-47cb-a3f4-58664d26d871" containerID="84af720b033e0813084f651dc8d59e820c61bb2232501fe50e4f346a78960db9" exitCode=143 Feb 24 05:53:36.091436 master-0 kubenswrapper[34361]: I0224 05:53:36.091135 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb464bf7d-gv8b6" event={"ID":"05343afd-e975-47cb-a3f4-58664d26d871","Type":"ContainerDied","Data":"84af720b033e0813084f651dc8d59e820c61bb2232501fe50e4f346a78960db9"} Feb 24 05:53:36.316633 master-0 kubenswrapper[34361]: I0224 05:53:36.316560 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qrtq2"] Feb 24 05:53:36.320356 master-0 kubenswrapper[34361]: I0224 
05:53:36.320005 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:36.327106 master-0 kubenswrapper[34361]: I0224 05:53:36.327030 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qrtq2"] Feb 24 05:53:36.397854 master-0 kubenswrapper[34361]: I0224 05:53:36.397691 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-operator-scripts\") pod \"nova-api-db-create-qrtq2\" (UID: \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\") " pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:36.398167 master-0 kubenswrapper[34361]: I0224 05:53:36.397990 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgcfn\" (UniqueName: \"kubernetes.io/projected/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-kube-api-access-mgcfn\") pod \"nova-api-db-create-qrtq2\" (UID: \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\") " pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:36.411528 master-0 kubenswrapper[34361]: I0224 05:53:36.411479 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kzhmb"] Feb 24 05:53:36.413626 master-0 kubenswrapper[34361]: I0224 05:53:36.413562 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kzhmb" Feb 24 05:53:36.425872 master-0 kubenswrapper[34361]: I0224 05:53:36.425756 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kzhmb"] Feb 24 05:53:36.492013 master-0 kubenswrapper[34361]: I0224 05:53:36.491743 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e077-account-create-update-fnxnr"] Feb 24 05:53:36.497060 master-0 kubenswrapper[34361]: I0224 05:53:36.496827 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e077-account-create-update-fnxnr" Feb 24 05:53:36.501226 master-0 kubenswrapper[34361]: I0224 05:53:36.501143 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-operator-scripts\") pod \"nova-api-db-create-qrtq2\" (UID: \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\") " pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:36.501375 master-0 kubenswrapper[34361]: I0224 05:53:36.501283 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckddz\" (UniqueName: \"kubernetes.io/projected/d21763a0-0808-4fe2-94bb-37aea78c00f0-kube-api-access-ckddz\") pod \"nova-cell0-db-create-kzhmb\" (UID: \"d21763a0-0808-4fe2-94bb-37aea78c00f0\") " pod="openstack/nova-cell0-db-create-kzhmb" Feb 24 05:53:36.501871 master-0 kubenswrapper[34361]: I0224 05:53:36.501411 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21763a0-0808-4fe2-94bb-37aea78c00f0-operator-scripts\") pod \"nova-cell0-db-create-kzhmb\" (UID: \"d21763a0-0808-4fe2-94bb-37aea78c00f0\") " pod="openstack/nova-cell0-db-create-kzhmb" Feb 24 05:53:36.501871 master-0 kubenswrapper[34361]: I0224 05:53:36.501693 34361 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgcfn\" (UniqueName: \"kubernetes.io/projected/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-kube-api-access-mgcfn\") pod \"nova-api-db-create-qrtq2\" (UID: \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\") " pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:36.503237 master-0 kubenswrapper[34361]: I0224 05:53:36.503194 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-operator-scripts\") pod \"nova-api-db-create-qrtq2\" (UID: \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\") " pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:36.504799 master-0 kubenswrapper[34361]: I0224 05:53:36.504745 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 24 05:53:36.539979 master-0 kubenswrapper[34361]: I0224 05:53:36.538384 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgcfn\" (UniqueName: \"kubernetes.io/projected/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-kube-api-access-mgcfn\") pod \"nova-api-db-create-qrtq2\" (UID: \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\") " pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:36.561591 master-0 kubenswrapper[34361]: I0224 05:53:36.560374 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e077-account-create-update-fnxnr"] Feb 24 05:53:36.606024 master-0 kubenswrapper[34361]: I0224 05:53:36.605153 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v54xk\" (UniqueName: \"kubernetes.io/projected/24794552-5cfa-428e-ad46-ce7a1794c7ec-kube-api-access-v54xk\") pod \"nova-api-e077-account-create-update-fnxnr\" (UID: \"24794552-5cfa-428e-ad46-ce7a1794c7ec\") " pod="openstack/nova-api-e077-account-create-update-fnxnr" Feb 24 05:53:36.606024 master-0 
kubenswrapper[34361]: I0224 05:53:36.605356 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24794552-5cfa-428e-ad46-ce7a1794c7ec-operator-scripts\") pod \"nova-api-e077-account-create-update-fnxnr\" (UID: \"24794552-5cfa-428e-ad46-ce7a1794c7ec\") " pod="openstack/nova-api-e077-account-create-update-fnxnr" Feb 24 05:53:36.606024 master-0 kubenswrapper[34361]: I0224 05:53:36.605412 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckddz\" (UniqueName: \"kubernetes.io/projected/d21763a0-0808-4fe2-94bb-37aea78c00f0-kube-api-access-ckddz\") pod \"nova-cell0-db-create-kzhmb\" (UID: \"d21763a0-0808-4fe2-94bb-37aea78c00f0\") " pod="openstack/nova-cell0-db-create-kzhmb" Feb 24 05:53:36.606024 master-0 kubenswrapper[34361]: I0224 05:53:36.605446 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21763a0-0808-4fe2-94bb-37aea78c00f0-operator-scripts\") pod \"nova-cell0-db-create-kzhmb\" (UID: \"d21763a0-0808-4fe2-94bb-37aea78c00f0\") " pod="openstack/nova-cell0-db-create-kzhmb" Feb 24 05:53:36.606523 master-0 kubenswrapper[34361]: I0224 05:53:36.606456 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21763a0-0808-4fe2-94bb-37aea78c00f0-operator-scripts\") pod \"nova-cell0-db-create-kzhmb\" (UID: \"d21763a0-0808-4fe2-94bb-37aea78c00f0\") " pod="openstack/nova-cell0-db-create-kzhmb" Feb 24 05:53:36.634766 master-0 kubenswrapper[34361]: I0224 05:53:36.634635 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="167c633e-12d2-45f6-a746-7437ee0bbfff" path="/var/lib/kubelet/pods/167c633e-12d2-45f6-a746-7437ee0bbfff/volumes" Feb 24 05:53:36.635774 master-0 kubenswrapper[34361]: I0224 05:53:36.635572 34361 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-4kz4t"] Feb 24 05:53:36.637996 master-0 kubenswrapper[34361]: I0224 05:53:36.637953 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4kz4t"] Feb 24 05:53:36.638099 master-0 kubenswrapper[34361]: I0224 05:53:36.638073 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4kz4t" Feb 24 05:53:36.653134 master-0 kubenswrapper[34361]: I0224 05:53:36.653058 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckddz\" (UniqueName: \"kubernetes.io/projected/d21763a0-0808-4fe2-94bb-37aea78c00f0-kube-api-access-ckddz\") pod \"nova-cell0-db-create-kzhmb\" (UID: \"d21763a0-0808-4fe2-94bb-37aea78c00f0\") " pod="openstack/nova-cell0-db-create-kzhmb" Feb 24 05:53:36.686730 master-0 kubenswrapper[34361]: I0224 05:53:36.686654 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:36.700959 master-0 kubenswrapper[34361]: I0224 05:53:36.700882 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8a9d-account-create-update-hxq4n"] Feb 24 05:53:36.707910 master-0 kubenswrapper[34361]: I0224 05:53:36.707834 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v54xk\" (UniqueName: \"kubernetes.io/projected/24794552-5cfa-428e-ad46-ce7a1794c7ec-kube-api-access-v54xk\") pod \"nova-api-e077-account-create-update-fnxnr\" (UID: \"24794552-5cfa-428e-ad46-ce7a1794c7ec\") " pod="openstack/nova-api-e077-account-create-update-fnxnr" Feb 24 05:53:36.708013 master-0 kubenswrapper[34361]: I0224 05:53:36.707951 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxbh\" (UniqueName: \"kubernetes.io/projected/481f56ba-4864-42fb-b0f3-02a4e4311e7d-kube-api-access-gjxbh\") pod 
\"nova-cell1-db-create-4kz4t\" (UID: \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\") " pod="openstack/nova-cell1-db-create-4kz4t"
Feb 24 05:53:36.708135 master-0 kubenswrapper[34361]: I0224 05:53:36.708106 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f56ba-4864-42fb-b0f3-02a4e4311e7d-operator-scripts\") pod \"nova-cell1-db-create-4kz4t\" (UID: \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\") " pod="openstack/nova-cell1-db-create-4kz4t"
Feb 24 05:53:36.708182 master-0 kubenswrapper[34361]: I0224 05:53:36.708136 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24794552-5cfa-428e-ad46-ce7a1794c7ec-operator-scripts\") pod \"nova-api-e077-account-create-update-fnxnr\" (UID: \"24794552-5cfa-428e-ad46-ce7a1794c7ec\") " pod="openstack/nova-api-e077-account-create-update-fnxnr"
Feb 24 05:53:36.709418 master-0 kubenswrapper[34361]: I0224 05:53:36.709383 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24794552-5cfa-428e-ad46-ce7a1794c7ec-operator-scripts\") pod \"nova-api-e077-account-create-update-fnxnr\" (UID: \"24794552-5cfa-428e-ad46-ce7a1794c7ec\") " pod="openstack/nova-api-e077-account-create-update-fnxnr"
Feb 24 05:53:36.724224 master-0 kubenswrapper[34361]: I0224 05:53:36.720070 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:36.729336 master-0 kubenswrapper[34361]: I0224 05:53:36.725816 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 24 05:53:36.734274 master-0 kubenswrapper[34361]: I0224 05:53:36.734228 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v54xk\" (UniqueName: \"kubernetes.io/projected/24794552-5cfa-428e-ad46-ce7a1794c7ec-kube-api-access-v54xk\") pod \"nova-api-e077-account-create-update-fnxnr\" (UID: \"24794552-5cfa-428e-ad46-ce7a1794c7ec\") " pod="openstack/nova-api-e077-account-create-update-fnxnr"
Feb 24 05:53:36.751136 master-0 kubenswrapper[34361]: I0224 05:53:36.749016 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kzhmb"
Feb 24 05:53:36.756762 master-0 kubenswrapper[34361]: I0224 05:53:36.756697 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8a9d-account-create-update-hxq4n"]
Feb 24 05:53:36.789679 master-0 kubenswrapper[34361]: I0224 05:53:36.789487 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8fb5a103-1e29-48ca-b6ff-e05e0e3dadaf\" (UniqueName: \"kubernetes.io/csi/topolvm.io^3c776d19-5d94-41f0-be3d-1e63c1bc4e65\") pod \"glance-bdafd-default-external-api-0\" (UID: \"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025\") " pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:53:36.810710 master-0 kubenswrapper[34361]: I0224 05:53:36.810650 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxbh\" (UniqueName: \"kubernetes.io/projected/481f56ba-4864-42fb-b0f3-02a4e4311e7d-kube-api-access-gjxbh\") pod \"nova-cell1-db-create-4kz4t\" (UID: \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\") " pod="openstack/nova-cell1-db-create-4kz4t"
Feb 24 05:53:36.810844 master-0 kubenswrapper[34361]: I0224 05:53:36.810825 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f56ba-4864-42fb-b0f3-02a4e4311e7d-operator-scripts\") pod \"nova-cell1-db-create-4kz4t\" (UID: \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\") " pod="openstack/nova-cell1-db-create-4kz4t"
Feb 24 05:53:36.810937 master-0 kubenswrapper[34361]: I0224 05:53:36.810902 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf6bp\" (UniqueName: \"kubernetes.io/projected/398060c6-ec35-4659-89a2-550ad8c81453-kube-api-access-mf6bp\") pod \"nova-cell0-8a9d-account-create-update-hxq4n\" (UID: \"398060c6-ec35-4659-89a2-550ad8c81453\") " pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:36.810992 master-0 kubenswrapper[34361]: I0224 05:53:36.810975 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398060c6-ec35-4659-89a2-550ad8c81453-operator-scripts\") pod \"nova-cell0-8a9d-account-create-update-hxq4n\" (UID: \"398060c6-ec35-4659-89a2-550ad8c81453\") " pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:36.812582 master-0 kubenswrapper[34361]: I0224 05:53:36.812531 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f56ba-4864-42fb-b0f3-02a4e4311e7d-operator-scripts\") pod \"nova-cell1-db-create-4kz4t\" (UID: \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\") " pod="openstack/nova-cell1-db-create-4kz4t"
Feb 24 05:53:36.827342 master-0 kubenswrapper[34361]: I0224 05:53:36.827272 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxbh\" (UniqueName: \"kubernetes.io/projected/481f56ba-4864-42fb-b0f3-02a4e4311e7d-kube-api-access-gjxbh\") pod \"nova-cell1-db-create-4kz4t\" (UID: \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\") " pod="openstack/nova-cell1-db-create-4kz4t"
Feb 24 05:53:36.847539 master-0 kubenswrapper[34361]: I0224 05:53:36.841335 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e077-account-create-update-fnxnr"
Feb 24 05:53:36.877693 master-0 kubenswrapper[34361]: I0224 05:53:36.877528 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:53:36.913840 master-0 kubenswrapper[34361]: I0224 05:53:36.913756 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf6bp\" (UniqueName: \"kubernetes.io/projected/398060c6-ec35-4659-89a2-550ad8c81453-kube-api-access-mf6bp\") pod \"nova-cell0-8a9d-account-create-update-hxq4n\" (UID: \"398060c6-ec35-4659-89a2-550ad8c81453\") " pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:36.914144 master-0 kubenswrapper[34361]: I0224 05:53:36.913947 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398060c6-ec35-4659-89a2-550ad8c81453-operator-scripts\") pod \"nova-cell0-8a9d-account-create-update-hxq4n\" (UID: \"398060c6-ec35-4659-89a2-550ad8c81453\") " pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:36.914748 master-0 kubenswrapper[34361]: I0224 05:53:36.914666 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c618-account-create-update-mmq8h"]
Feb 24 05:53:36.916728 master-0 kubenswrapper[34361]: I0224 05:53:36.916648 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:36.917691 master-0 kubenswrapper[34361]: I0224 05:53:36.917627 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398060c6-ec35-4659-89a2-550ad8c81453-operator-scripts\") pod \"nova-cell0-8a9d-account-create-update-hxq4n\" (UID: \"398060c6-ec35-4659-89a2-550ad8c81453\") " pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:36.919527 master-0 kubenswrapper[34361]: I0224 05:53:36.919462 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 24 05:53:36.939878 master-0 kubenswrapper[34361]: I0224 05:53:36.939808 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c618-account-create-update-mmq8h"]
Feb 24 05:53:36.940234 master-0 kubenswrapper[34361]: I0224 05:53:36.940145 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf6bp\" (UniqueName: \"kubernetes.io/projected/398060c6-ec35-4659-89a2-550ad8c81453-kube-api-access-mf6bp\") pod \"nova-cell0-8a9d-account-create-update-hxq4n\" (UID: \"398060c6-ec35-4659-89a2-550ad8c81453\") " pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:37.017937 master-0 kubenswrapper[34361]: I0224 05:53:37.017832 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fw2r\" (UniqueName: \"kubernetes.io/projected/06a1970d-fc4d-4522-a195-fa7fc9d5485d-kube-api-access-8fw2r\") pod \"nova-cell1-c618-account-create-update-mmq8h\" (UID: \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\") " pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:37.018345 master-0 kubenswrapper[34361]: I0224 05:53:37.018157 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a1970d-fc4d-4522-a195-fa7fc9d5485d-operator-scripts\") pod \"nova-cell1-c618-account-create-update-mmq8h\" (UID: \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\") " pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:37.105079 master-0 kubenswrapper[34361]: I0224 05:53:37.104998 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4kz4t"
Feb 24 05:53:37.111425 master-0 kubenswrapper[34361]: I0224 05:53:37.111393 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:37.124022 master-0 kubenswrapper[34361]: I0224 05:53:37.123521 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fw2r\" (UniqueName: \"kubernetes.io/projected/06a1970d-fc4d-4522-a195-fa7fc9d5485d-kube-api-access-8fw2r\") pod \"nova-cell1-c618-account-create-update-mmq8h\" (UID: \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\") " pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:37.124022 master-0 kubenswrapper[34361]: I0224 05:53:37.123814 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a1970d-fc4d-4522-a195-fa7fc9d5485d-operator-scripts\") pod \"nova-cell1-c618-account-create-update-mmq8h\" (UID: \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\") " pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:37.125353 master-0 kubenswrapper[34361]: I0224 05:53:37.125294 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a1970d-fc4d-4522-a195-fa7fc9d5485d-operator-scripts\") pod \"nova-cell1-c618-account-create-update-mmq8h\" (UID: \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\") " pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:37.154261 master-0 kubenswrapper[34361]: I0224 05:53:37.154094 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fw2r\" (UniqueName: \"kubernetes.io/projected/06a1970d-fc4d-4522-a195-fa7fc9d5485d-kube-api-access-8fw2r\") pod \"nova-cell1-c618-account-create-update-mmq8h\" (UID: \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\") " pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:37.244700 master-0 kubenswrapper[34361]: I0224 05:53:37.244615 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:37.748826 master-0 kubenswrapper[34361]: I0224 05:53:37.747270 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55b78786dc-sn557"]
Feb 24 05:53:37.750265 master-0 kubenswrapper[34361]: I0224 05:53:37.749952 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.766932 master-0 kubenswrapper[34361]: I0224 05:53:37.766816 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55b78786dc-sn557"]
Feb 24 05:53:37.833752 master-0 kubenswrapper[34361]: I0224 05:53:37.833662 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"]
Feb 24 05:53:37.839766 master-0 kubenswrapper[34361]: I0224 05:53:37.839690 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 24 05:53:37.844274 master-0 kubenswrapper[34361]: I0224 05:53:37.844202 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts"
Feb 24 05:53:37.844640 master-0 kubenswrapper[34361]: I0224 05:53:37.844433 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data"
Feb 24 05:53:37.844640 master-0 kubenswrapper[34361]: I0224 05:53:37.844570 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport"
Feb 24 05:53:37.858586 master-0 kubenswrapper[34361]: I0224 05:53:37.858506 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 24 05:53:37.875277 master-0 kubenswrapper[34361]: I0224 05:53:37.874925 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-swift-storage-0\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.875956 master-0 kubenswrapper[34361]: I0224 05:53:37.875075 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-nb\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.875956 master-0 kubenswrapper[34361]: I0224 05:53:37.875529 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7x5d\" (UniqueName: \"kubernetes.io/projected/719517cc-5f72-4139-aaa2-99bd0923702d-kube-api-access-h7x5d\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.875956 master-0 kubenswrapper[34361]: I0224 05:53:37.875578 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-sb\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.875956 master-0 kubenswrapper[34361]: I0224 05:53:37.875623 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-config\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.875956 master-0 kubenswrapper[34361]: I0224 05:53:37.875719 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-svc\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.979387 master-0 kubenswrapper[34361]: I0224 05:53:37.979172 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtccb\" (UniqueName: \"kubernetes.io/projected/0e7e3217-839a-4443-9bcf-a7e25f1ac757-kube-api-access-dtccb\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:37.979387 master-0 kubenswrapper[34361]: I0224 05:53:37.979348 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-swift-storage-0\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.979673 master-0 kubenswrapper[34361]: I0224 05:53:37.979442 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:37.979673 master-0 kubenswrapper[34361]: I0224 05:53:37.979518 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-nb\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.979673 master-0 kubenswrapper[34361]: I0224 05:53:37.979544 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:37.979673 master-0 kubenswrapper[34361]: I0224 05:53:37.979606 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7x5d\" (UniqueName: \"kubernetes.io/projected/719517cc-5f72-4139-aaa2-99bd0923702d-kube-api-access-h7x5d\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.979673 master-0 kubenswrapper[34361]: I0224 05:53:37.979643 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-sb\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.979843 master-0 kubenswrapper[34361]: I0224 05:53:37.979708 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-config\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.979843 master-0 kubenswrapper[34361]: I0224 05:53:37.979808 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-config\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:37.979905 master-0 kubenswrapper[34361]: I0224 05:53:37.979853 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-svc\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.979905 master-0 kubenswrapper[34361]: I0224 05:53:37.979882 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-scripts\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:37.979975 master-0 kubenswrapper[34361]: I0224 05:53:37.979937 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0e7e3217-839a-4443-9bcf-a7e25f1ac757-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:37.979975 master-0 kubenswrapper[34361]: I0224 05:53:37.979973 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:37.981236 master-0 kubenswrapper[34361]: I0224 05:53:37.981202 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-swift-storage-0\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.981524 master-0 kubenswrapper[34361]: I0224 05:53:37.981485 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-nb\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.981701 master-0 kubenswrapper[34361]: I0224 05:53:37.981642 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-sb\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.982608 master-0 kubenswrapper[34361]: I0224 05:53:37.982557 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-svc\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:37.983008 master-0 kubenswrapper[34361]: I0224 05:53:37.982989 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-config\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:38.008800 master-0 kubenswrapper[34361]: I0224 05:53:38.008754 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7x5d\" (UniqueName: \"kubernetes.io/projected/719517cc-5f72-4139-aaa2-99bd0923702d-kube-api-access-h7x5d\") pod \"dnsmasq-dns-55b78786dc-sn557\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:38.082912 master-0 kubenswrapper[34361]: I0224 05:53:38.082444 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-config\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.082912 master-0 kubenswrapper[34361]: I0224 05:53:38.082545 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-scripts\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.082912 master-0 kubenswrapper[34361]: I0224 05:53:38.082576 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0e7e3217-839a-4443-9bcf-a7e25f1ac757-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.084561 master-0 kubenswrapper[34361]: I0224 05:53:38.084081 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.084561 master-0 kubenswrapper[34361]: I0224 05:53:38.084382 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtccb\" (UniqueName: \"kubernetes.io/projected/0e7e3217-839a-4443-9bcf-a7e25f1ac757-kube-api-access-dtccb\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.084711 master-0 kubenswrapper[34361]: I0224 05:53:38.084643 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.084866 master-0 kubenswrapper[34361]: I0224 05:53:38.084820 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.084866 master-0 kubenswrapper[34361]: I0224 05:53:38.084842 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.085349 master-0 kubenswrapper[34361]: I0224 05:53:38.085296 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.095855 master-0 kubenswrapper[34361]: I0224 05:53:38.095788 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-config\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.107401 master-0 kubenswrapper[34361]: I0224 05:53:38.104330 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.108821 master-0 kubenswrapper[34361]: I0224 05:53:38.108578 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-scripts\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.112034 master-0 kubenswrapper[34361]: I0224 05:53:38.111922 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0e7e3217-839a-4443-9bcf-a7e25f1ac757-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.112698 master-0 kubenswrapper[34361]: I0224 05:53:38.112654 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtccb\" (UniqueName: \"kubernetes.io/projected/0e7e3217-839a-4443-9bcf-a7e25f1ac757-kube-api-access-dtccb\") pod \"ironic-inspector-0\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " pod="openstack/ironic-inspector-0"
Feb 24 05:53:38.123409 master-0 kubenswrapper[34361]: I0224 05:53:38.123355 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:38.183338 master-0 kubenswrapper[34361]: I0224 05:53:38.181999 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0"
Feb 24 05:53:40.259641 master-0 kubenswrapper[34361]: I0224 05:53:40.259533 34361 generic.go:334] "Generic (PLEG): container finished" podID="05343afd-e975-47cb-a3f4-58664d26d871" containerID="95ec194de761882be6bf22ca2973e3e0e5fbb4be965fad586d74dc01ee70cc37" exitCode=0
Feb 24 05:53:40.259641 master-0 kubenswrapper[34361]: I0224 05:53:40.259628 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb464bf7d-gv8b6" event={"ID":"05343afd-e975-47cb-a3f4-58664d26d871","Type":"ContainerDied","Data":"95ec194de761882be6bf22ca2973e3e0e5fbb4be965fad586d74dc01ee70cc37"}
Feb 24 05:53:41.474209 master-0 kubenswrapper[34361]: I0224 05:53:41.474141 34361 scope.go:117] "RemoveContainer" containerID="0942ee6a98d6bb5a9765463e9f2e7b660623d6201a2a9b274816da5132fa8c64"
Feb 24 05:53:41.587906 master-0 kubenswrapper[34361]: I0224 05:53:41.587825 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d477bdc58-p8d8s"
Feb 24 05:53:41.599247 master-0 kubenswrapper[34361]: I0224 05:53:41.599180 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:53:41.674411 master-0 kubenswrapper[34361]: I0224 05:53:41.674105 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-scripts\") pod \"9353daa8-f1c5-493d-8f31-bfc3074c6223\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") "
Feb 24 05:53:41.674411 master-0 kubenswrapper[34361]: I0224 05:53:41.674251 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrfr2\" (UniqueName: \"kubernetes.io/projected/9353daa8-f1c5-493d-8f31-bfc3074c6223-kube-api-access-hrfr2\") pod \"9353daa8-f1c5-493d-8f31-bfc3074c6223\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") "
Feb 24 05:53:41.674411 master-0 kubenswrapper[34361]: I0224 05:53:41.674382 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-config-data\") pod \"9353daa8-f1c5-493d-8f31-bfc3074c6223\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") "
Feb 24 05:53:41.674944 master-0 kubenswrapper[34361]: I0224 05:53:41.674428 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-combined-ca-bundle\") pod \"9353daa8-f1c5-493d-8f31-bfc3074c6223\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") "
Feb 24 05:53:41.674944 master-0 kubenswrapper[34361]: I0224 05:53:41.674505 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-config\") pod \"3057364f-388c-47da-adc8-4c8e074b8362\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") "
Feb 24 05:53:41.676425 master-0 kubenswrapper[34361]: I0224 05:53:41.674965 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"9353daa8-f1c5-493d-8f31-bfc3074c6223\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") "
Feb 24 05:53:41.676425 master-0 kubenswrapper[34361]: I0224 05:53:41.675001 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-httpd-run\") pod \"9353daa8-f1c5-493d-8f31-bfc3074c6223\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") "
Feb 24 05:53:41.676425 master-0 kubenswrapper[34361]: I0224 05:53:41.675122 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-internal-tls-certs\") pod \"9353daa8-f1c5-493d-8f31-bfc3074c6223\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") "
Feb 24 05:53:41.676425 master-0 kubenswrapper[34361]: I0224 05:53:41.675274 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-combined-ca-bundle\") pod \"3057364f-388c-47da-adc8-4c8e074b8362\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") "
Feb 24 05:53:41.676425 master-0 kubenswrapper[34361]: I0224 05:53:41.675311 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k2wm\" (UniqueName: \"kubernetes.io/projected/3057364f-388c-47da-adc8-4c8e074b8362-kube-api-access-5k2wm\") pod \"3057364f-388c-47da-adc8-4c8e074b8362\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") "
Feb 24 05:53:41.676425 master-0 kubenswrapper[34361]: I0224 05:53:41.675355 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-httpd-config\") pod \"3057364f-388c-47da-adc8-4c8e074b8362\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") "
Feb 24 05:53:41.676425 master-0 kubenswrapper[34361]: I0224 05:53:41.675418 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-ovndb-tls-certs\") pod \"3057364f-388c-47da-adc8-4c8e074b8362\" (UID: \"3057364f-388c-47da-adc8-4c8e074b8362\") "
Feb 24 05:53:41.676425 master-0 kubenswrapper[34361]: I0224 05:53:41.675463 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-logs\") pod \"9353daa8-f1c5-493d-8f31-bfc3074c6223\" (UID: \"9353daa8-f1c5-493d-8f31-bfc3074c6223\") "
Feb 24 05:53:41.679359 master-0 kubenswrapper[34361]: I0224 05:53:41.678934 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9353daa8-f1c5-493d-8f31-bfc3074c6223-kube-api-access-hrfr2" (OuterVolumeSpecName: "kube-api-access-hrfr2") pod "9353daa8-f1c5-493d-8f31-bfc3074c6223" (UID: "9353daa8-f1c5-493d-8f31-bfc3074c6223"). InnerVolumeSpecName "kube-api-access-hrfr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:41.679359 master-0 kubenswrapper[34361]: I0224 05:53:41.679289 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-logs" (OuterVolumeSpecName: "logs") pod "9353daa8-f1c5-493d-8f31-bfc3074c6223" (UID: "9353daa8-f1c5-493d-8f31-bfc3074c6223"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:53:41.679576 master-0 kubenswrapper[34361]: I0224 05:53:41.679518 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3057364f-388c-47da-adc8-4c8e074b8362" (UID: "3057364f-388c-47da-adc8-4c8e074b8362"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:41.681039 master-0 kubenswrapper[34361]: I0224 05:53:41.680361 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9353daa8-f1c5-493d-8f31-bfc3074c6223" (UID: "9353daa8-f1c5-493d-8f31-bfc3074c6223"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:53:41.681039 master-0 kubenswrapper[34361]: I0224 05:53:41.680523 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-scripts" (OuterVolumeSpecName: "scripts") pod "9353daa8-f1c5-493d-8f31-bfc3074c6223" (UID: "9353daa8-f1c5-493d-8f31-bfc3074c6223"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:41.711904 master-0 kubenswrapper[34361]: I0224 05:53:41.711830 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3057364f-388c-47da-adc8-4c8e074b8362-kube-api-access-5k2wm" (OuterVolumeSpecName: "kube-api-access-5k2wm") pod "3057364f-388c-47da-adc8-4c8e074b8362" (UID: "3057364f-388c-47da-adc8-4c8e074b8362"). InnerVolumeSpecName "kube-api-access-5k2wm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:41.755517 master-0 kubenswrapper[34361]: I0224 05:53:41.755438 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9353daa8-f1c5-493d-8f31-bfc3074c6223" (UID: "9353daa8-f1c5-493d-8f31-bfc3074c6223"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:53:41.782699 master-0 kubenswrapper[34361]: I0224 05:53:41.782632 34361 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:41.782699 master-0 kubenswrapper[34361]: I0224 05:53:41.782682 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k2wm\" (UniqueName: \"kubernetes.io/projected/3057364f-388c-47da-adc8-4c8e074b8362-kube-api-access-5k2wm\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:41.782699 master-0 kubenswrapper[34361]: I0224 05:53:41.782699 34361 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-httpd-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:41.782699 master-0 kubenswrapper[34361]: I0224 05:53:41.782709 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9353daa8-f1c5-493d-8f31-bfc3074c6223-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:41.782699 master-0 kubenswrapper[34361]: I0224 05:53:41.782717 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:41.782699 master-0 kubenswrapper[34361]: I0224 05:53:41.782727 34361
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrfr2\" (UniqueName: \"kubernetes.io/projected/9353daa8-f1c5-493d-8f31-bfc3074c6223-kube-api-access-hrfr2\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:41.783351 master-0 kubenswrapper[34361]: I0224 05:53:41.782737 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:41.784355 master-0 kubenswrapper[34361]: I0224 05:53:41.784284 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0" (OuterVolumeSpecName: "glance") pod "9353daa8-f1c5-493d-8f31-bfc3074c6223" (UID: "9353daa8-f1c5-493d-8f31-bfc3074c6223"). InnerVolumeSpecName "pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 05:53:41.813781 master-0 kubenswrapper[34361]: I0224 05:53:41.813709 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-config-data" (OuterVolumeSpecName: "config-data") pod "9353daa8-f1c5-493d-8f31-bfc3074c6223" (UID: "9353daa8-f1c5-493d-8f31-bfc3074c6223"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:41.912867 master-0 kubenswrapper[34361]: I0224 05:53:41.877524 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9353daa8-f1c5-493d-8f31-bfc3074c6223" (UID: "9353daa8-f1c5-493d-8f31-bfc3074c6223"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:41.912867 master-0 kubenswrapper[34361]: I0224 05:53:41.886518 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:41.912867 master-0 kubenswrapper[34361]: I0224 05:53:41.886596 34361 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") on node \"master-0\" " Feb 24 05:53:41.912867 master-0 kubenswrapper[34361]: I0224 05:53:41.886613 34361 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9353daa8-f1c5-493d-8f31-bfc3074c6223-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:41.912867 master-0 kubenswrapper[34361]: I0224 05:53:41.891677 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-config" (OuterVolumeSpecName: "config") pod "3057364f-388c-47da-adc8-4c8e074b8362" (UID: "3057364f-388c-47da-adc8-4c8e074b8362"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:41.927992 master-0 kubenswrapper[34361]: I0224 05:53:41.925821 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3057364f-388c-47da-adc8-4c8e074b8362" (UID: "3057364f-388c-47da-adc8-4c8e074b8362"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:41.936979 master-0 kubenswrapper[34361]: I0224 05:53:41.936726 34361 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 24 05:53:41.936979 master-0 kubenswrapper[34361]: I0224 05:53:41.936888 34361 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7" (UniqueName: "kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0") on node "master-0" Feb 24 05:53:41.975457 master-0 kubenswrapper[34361]: I0224 05:53:41.975377 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3057364f-388c-47da-adc8-4c8e074b8362" (UID: "3057364f-388c-47da-adc8-4c8e074b8362"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:41.989036 master-0 kubenswrapper[34361]: I0224 05:53:41.988956 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:41.989036 master-0 kubenswrapper[34361]: I0224 05:53:41.988989 34361 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:41.989036 master-0 kubenswrapper[34361]: I0224 05:53:41.989001 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3057364f-388c-47da-adc8-4c8e074b8362-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:41.989036 master-0 kubenswrapper[34361]: I0224 05:53:41.989012 34361 reconciler_common.go:293] "Volume detached for volume 
\"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:42.083685 master-0 kubenswrapper[34361]: I0224 05:53:42.083629 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:53:42.092523 master-0 kubenswrapper[34361]: I0224 05:53:42.092460 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-public-tls-certs\") pod \"05343afd-e975-47cb-a3f4-58664d26d871\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " Feb 24 05:53:42.092619 master-0 kubenswrapper[34361]: I0224 05:53:42.092592 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-combined-ca-bundle\") pod \"05343afd-e975-47cb-a3f4-58664d26d871\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " Feb 24 05:53:42.092688 master-0 kubenswrapper[34361]: I0224 05:53:42.092641 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-config-data\") pod \"05343afd-e975-47cb-a3f4-58664d26d871\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " Feb 24 05:53:42.092793 master-0 kubenswrapper[34361]: I0224 05:53:42.092777 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-scripts\") pod \"05343afd-e975-47cb-a3f4-58664d26d871\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " Feb 24 05:53:42.092895 master-0 kubenswrapper[34361]: I0224 05:53:42.092861 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-m4pxh\" (UniqueName: \"kubernetes.io/projected/05343afd-e975-47cb-a3f4-58664d26d871-kube-api-access-m4pxh\") pod \"05343afd-e975-47cb-a3f4-58664d26d871\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " Feb 24 05:53:42.092951 master-0 kubenswrapper[34361]: I0224 05:53:42.092937 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05343afd-e975-47cb-a3f4-58664d26d871-logs\") pod \"05343afd-e975-47cb-a3f4-58664d26d871\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " Feb 24 05:53:42.094683 master-0 kubenswrapper[34361]: I0224 05:53:42.094511 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05343afd-e975-47cb-a3f4-58664d26d871-logs" (OuterVolumeSpecName: "logs") pod "05343afd-e975-47cb-a3f4-58664d26d871" (UID: "05343afd-e975-47cb-a3f4-58664d26d871"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:42.105350 master-0 kubenswrapper[34361]: I0224 05:53:42.105238 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05343afd-e975-47cb-a3f4-58664d26d871-kube-api-access-m4pxh" (OuterVolumeSpecName: "kube-api-access-m4pxh") pod "05343afd-e975-47cb-a3f4-58664d26d871" (UID: "05343afd-e975-47cb-a3f4-58664d26d871"). InnerVolumeSpecName "kube-api-access-m4pxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:42.108667 master-0 kubenswrapper[34361]: I0224 05:53:42.108593 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-scripts" (OuterVolumeSpecName: "scripts") pod "05343afd-e975-47cb-a3f4-58664d26d871" (UID: "05343afd-e975-47cb-a3f4-58664d26d871"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:42.205053 master-0 kubenswrapper[34361]: I0224 05:53:42.198613 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-internal-tls-certs\") pod \"05343afd-e975-47cb-a3f4-58664d26d871\" (UID: \"05343afd-e975-47cb-a3f4-58664d26d871\") " Feb 24 05:53:42.205053 master-0 kubenswrapper[34361]: I0224 05:53:42.199577 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:42.205053 master-0 kubenswrapper[34361]: I0224 05:53:42.199596 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4pxh\" (UniqueName: \"kubernetes.io/projected/05343afd-e975-47cb-a3f4-58664d26d871-kube-api-access-m4pxh\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:42.205053 master-0 kubenswrapper[34361]: I0224 05:53:42.199612 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05343afd-e975-47cb-a3f4-58664d26d871-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:42.231313 master-0 kubenswrapper[34361]: I0224 05:53:42.231159 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-config-data" (OuterVolumeSpecName: "config-data") pod "05343afd-e975-47cb-a3f4-58664d26d871" (UID: "05343afd-e975-47cb-a3f4-58664d26d871"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:42.238682 master-0 kubenswrapper[34361]: I0224 05:53:42.238476 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05343afd-e975-47cb-a3f4-58664d26d871" (UID: "05343afd-e975-47cb-a3f4-58664d26d871"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:42.319117 master-0 kubenswrapper[34361]: I0224 05:53:42.313422 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:42.319117 master-0 kubenswrapper[34361]: I0224 05:53:42.313505 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:42.356821 master-0 kubenswrapper[34361]: I0224 05:53:42.354220 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fb464bf7d-gv8b6" event={"ID":"05343afd-e975-47cb-a3f4-58664d26d871","Type":"ContainerDied","Data":"fb2bf7b7c831f1e20ad5adf412de75ab97252a198dcc478117147f864b83b15c"} Feb 24 05:53:42.356821 master-0 kubenswrapper[34361]: I0224 05:53:42.354308 34361 scope.go:117] "RemoveContainer" containerID="95ec194de761882be6bf22ca2973e3e0e5fbb4be965fad586d74dc01ee70cc37" Feb 24 05:53:42.356821 master-0 kubenswrapper[34361]: I0224 05:53:42.354609 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fb464bf7d-gv8b6" Feb 24 05:53:42.362201 master-0 kubenswrapper[34361]: I0224 05:53:42.362137 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d477bdc58-p8d8s" event={"ID":"3057364f-388c-47da-adc8-4c8e074b8362","Type":"ContainerDied","Data":"b135f60c4c74d5c03d568fb4b645d7d7e27c289dc4554b7bffed6440ae659678"} Feb 24 05:53:42.362393 master-0 kubenswrapper[34361]: I0224 05:53:42.362352 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d477bdc58-p8d8s" Feb 24 05:53:42.371173 master-0 kubenswrapper[34361]: I0224 05:53:42.371042 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" event={"ID":"40a5b237-764f-4367-85a5-4153a8f90a3e","Type":"ContainerStarted","Data":"9a980b2d218dc784fd4aeaeb833cfa11480f5f68bf7cc1ef2a5ea44537be4012"} Feb 24 05:53:42.371432 master-0 kubenswrapper[34361]: I0224 05:53:42.371373 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np" Feb 24 05:53:42.375069 master-0 kubenswrapper[34361]: I0224 05:53:42.375022 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"9353daa8-f1c5-493d-8f31-bfc3074c6223","Type":"ContainerDied","Data":"88a62162cc1b7d58341f32b6deb8263d1cc7b2de23fed7a618d79da9c1aad7c7"} Feb 24 05:53:42.375178 master-0 kubenswrapper[34361]: I0224 05:53:42.375129 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:42.382102 master-0 kubenswrapper[34361]: I0224 05:53:42.382058 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "05343afd-e975-47cb-a3f4-58664d26d871" (UID: "05343afd-e975-47cb-a3f4-58664d26d871"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:42.417362 master-0 kubenswrapper[34361]: I0224 05:53:42.417283 34361 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:42.449194 master-0 kubenswrapper[34361]: I0224 05:53:42.449075 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "05343afd-e975-47cb-a3f4-58664d26d871" (UID: "05343afd-e975-47cb-a3f4-58664d26d871"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:42.527785 master-0 kubenswrapper[34361]: I0224 05:53:42.527700 34361 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/05343afd-e975-47cb-a3f4-58664d26d871-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:43.091116 master-0 kubenswrapper[34361]: I0224 05:53:43.091047 34361 scope.go:117] "RemoveContainer" containerID="84af720b033e0813084f651dc8d59e820c61bb2232501fe50e4f346a78960db9" Feb 24 05:53:43.137820 master-0 kubenswrapper[34361]: I0224 05:53:43.137773 34361 scope.go:117] "RemoveContainer" containerID="4f7f4865ef45e0d6f6cd182f7e4ff15bcccff460d685cf4e910cc0f51615f94e" Feb 24 05:53:43.192067 master-0 kubenswrapper[34361]: I0224 05:53:43.190927 34361 scope.go:117] "RemoveContainer" containerID="b4136cfb871ee6b478ec62af981a996389b5b5b3043351647079c6db301b06b0" Feb 24 05:53:43.221752 master-0 kubenswrapper[34361]: I0224 05:53:43.221469 34361 scope.go:117] "RemoveContainer" containerID="c3119b8a3607fa9c3df6b54da589b714968f583afefab83eb324d1696714b2b6" Feb 24 05:53:43.253196 master-0 kubenswrapper[34361]: I0224 05:53:43.253132 34361 scope.go:117] "RemoveContainer" containerID="f21829fd6b0d389f5b690cffbcf84955a80e446f4b913fa10461795c84683f71" Feb 24 05:53:43.388253 master-0 kubenswrapper[34361]: I0224 05:53:43.387946 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-4kz4t"] Feb 24 05:53:43.418864 master-0 kubenswrapper[34361]: W0224 05:53:43.418212 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24794552_5cfa_428e_ad46_ce7a1794c7ec.slice/crio-4eb611ed7ce5643b79ab7cd363c2fca1c9a408bf10eabcb62f9bf7f508fc15b5 WatchSource:0}: Error finding container 4eb611ed7ce5643b79ab7cd363c2fca1c9a408bf10eabcb62f9bf7f508fc15b5: Status 404 returned error can't find the container with id 
4eb611ed7ce5643b79ab7cd363c2fca1c9a408bf10eabcb62f9bf7f508fc15b5 Feb 24 05:53:43.420574 master-0 kubenswrapper[34361]: I0224 05:53:43.420492 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e077-account-create-update-fnxnr"] Feb 24 05:53:43.422570 master-0 kubenswrapper[34361]: W0224 05:53:43.422513 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd21763a0_0808_4fe2_94bb_37aea78c00f0.slice/crio-89eb2e381a90e7e2742ae5d5a8fb151304b333b06cc12bbef8f33394ded9e0f6 WatchSource:0}: Error finding container 89eb2e381a90e7e2742ae5d5a8fb151304b333b06cc12bbef8f33394ded9e0f6: Status 404 returned error can't find the container with id 89eb2e381a90e7e2742ae5d5a8fb151304b333b06cc12bbef8f33394ded9e0f6 Feb 24 05:53:43.468803 master-0 kubenswrapper[34361]: I0224 05:53:43.468743 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qrtq2"] Feb 24 05:53:43.479549 master-0 kubenswrapper[34361]: I0224 05:53:43.477005 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6be4831a-3890-44a6-8e35-58245f3d1ae0","Type":"ContainerStarted","Data":"7b6f6ab46af3ded3ff74f925a85dca3fae023ae7137823583e389dd8cd0858dd"} Feb 24 05:53:43.486670 master-0 kubenswrapper[34361]: I0224 05:53:43.486584 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c618-account-create-update-mmq8h"] Feb 24 05:53:43.517851 master-0 kubenswrapper[34361]: I0224 05:53:43.517762 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kzhmb"] Feb 24 05:53:43.548777 master-0 kubenswrapper[34361]: I0224 05:53:43.544349 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8a9d-account-create-update-hxq4n"] Feb 24 05:53:43.590991 master-0 kubenswrapper[34361]: I0224 05:53:43.590875 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-55b78786dc-sn557"] Feb 24 05:53:43.988016 master-0 kubenswrapper[34361]: I0224 05:53:43.987913 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fb464bf7d-gv8b6"] Feb 24 05:53:44.154986 master-0 kubenswrapper[34361]: I0224 05:53:44.148132 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-fb464bf7d-gv8b6"] Feb 24 05:53:44.195639 master-0 kubenswrapper[34361]: I0224 05:53:44.193456 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:53:44.260591 master-0 kubenswrapper[34361]: I0224 05:53:44.247826 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:53:44.260591 master-0 kubenswrapper[34361]: I0224 05:53:44.247931 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d477bdc58-p8d8s"] Feb 24 05:53:44.260591 master-0 kubenswrapper[34361]: I0224 05:53:44.247945 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d477bdc58-p8d8s"] Feb 24 05:53:44.260591 master-0 kubenswrapper[34361]: I0224 05:53:44.260218 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: E0224 05:53:44.261596 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05343afd-e975-47cb-a3f4-58664d26d871" containerName="placement-log" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: I0224 05:53:44.261638 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="05343afd-e975-47cb-a3f4-58664d26d871" containerName="placement-log" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: E0224 05:53:44.261671 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3057364f-388c-47da-adc8-4c8e074b8362" containerName="neutron-httpd" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: I0224 
05:53:44.261681 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="3057364f-388c-47da-adc8-4c8e074b8362" containerName="neutron-httpd" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: E0224 05:53:44.261713 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3057364f-388c-47da-adc8-4c8e074b8362" containerName="neutron-api" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: I0224 05:53:44.261727 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="3057364f-388c-47da-adc8-4c8e074b8362" containerName="neutron-api" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: E0224 05:53:44.261756 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerName="glance-log" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: I0224 05:53:44.261769 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerName="glance-log" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: E0224 05:53:44.261792 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05343afd-e975-47cb-a3f4-58664d26d871" containerName="placement-api" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: I0224 05:53:44.261803 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="05343afd-e975-47cb-a3f4-58664d26d871" containerName="placement-api" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: E0224 05:53:44.261832 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerName="glance-httpd" Feb 24 05:53:44.262152 master-0 kubenswrapper[34361]: I0224 05:53:44.261842 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerName="glance-httpd" Feb 24 05:53:44.262600 master-0 kubenswrapper[34361]: I0224 05:53:44.262242 34361 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="05343afd-e975-47cb-a3f4-58664d26d871" containerName="placement-api" Feb 24 05:53:44.262600 master-0 kubenswrapper[34361]: I0224 05:53:44.262277 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="05343afd-e975-47cb-a3f4-58664d26d871" containerName="placement-log" Feb 24 05:53:44.262600 master-0 kubenswrapper[34361]: I0224 05:53:44.262300 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerName="glance-httpd" Feb 24 05:53:44.262600 master-0 kubenswrapper[34361]: I0224 05:53:44.262362 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="3057364f-388c-47da-adc8-4c8e074b8362" containerName="neutron-api" Feb 24 05:53:44.262600 master-0 kubenswrapper[34361]: I0224 05:53:44.262379 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" containerName="glance-log" Feb 24 05:53:44.262600 master-0 kubenswrapper[34361]: I0224 05:53:44.262400 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="3057364f-388c-47da-adc8-4c8e074b8362" containerName="neutron-httpd" Feb 24 05:53:44.273348 master-0 kubenswrapper[34361]: I0224 05:53:44.272175 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.273664 master-0 kubenswrapper[34361]: W0224 05:53:44.273594 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cc967eb_a8c3_4147_a3ac_bd6af5dd3025.slice/crio-d2fd7722240ee71f4dbf4d7be8ca57edf6cd92318e9d2f9ce2f96ea134c1865f WatchSource:0}: Error finding container d2fd7722240ee71f4dbf4d7be8ca57edf6cd92318e9d2f9ce2f96ea134c1865f: Status 404 returned error can't find the container with id d2fd7722240ee71f4dbf4d7be8ca57edf6cd92318e9d2f9ce2f96ea134c1865f Feb 24 05:53:44.278247 master-0 kubenswrapper[34361]: I0224 05:53:44.278187 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-bdafd-default-internal-config-data" Feb 24 05:53:44.278506 master-0 kubenswrapper[34361]: I0224 05:53:44.278445 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 24 05:53:44.295853 master-0 kubenswrapper[34361]: I0224 05:53:44.294166 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-external-api-0"] Feb 24 05:53:44.318265 master-0 kubenswrapper[34361]: I0224 05:53:44.318179 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:53:44.369680 master-0 kubenswrapper[34361]: I0224 05:53:44.361607 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.247947723 podStartE2EDuration="28.361582434s" podCreationTimestamp="2026-02-24 05:53:16 +0000 UTC" firstStartedPulling="2026-02-24 05:53:17.506938007 +0000 UTC m=+957.209555053" lastFinishedPulling="2026-02-24 05:53:41.620572718 +0000 UTC m=+981.323189764" observedRunningTime="2026-02-24 05:53:44.323995871 +0000 UTC m=+984.026612917" watchObservedRunningTime="2026-02-24 05:53:44.361582434 +0000 UTC 
m=+984.064199500" Feb 24 05:53:44.435711 master-0 kubenswrapper[34361]: I0224 05:53:44.433125 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpg5f\" (UniqueName: \"kubernetes.io/projected/ce78ac60-4347-4838-95d0-09b1342445d9-kube-api-access-zpg5f\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.435711 master-0 kubenswrapper[34361]: I0224 05:53:44.433207 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-scripts\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.435711 master-0 kubenswrapper[34361]: I0224 05:53:44.433235 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-combined-ca-bundle\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.435711 master-0 kubenswrapper[34361]: I0224 05:53:44.433995 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-config-data\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.435711 master-0 kubenswrapper[34361]: I0224 05:53:44.434186 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ce78ac60-4347-4838-95d0-09b1342445d9-httpd-run\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.435711 master-0 kubenswrapper[34361]: I0224 05:53:44.434250 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-internal-tls-certs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.435711 master-0 kubenswrapper[34361]: I0224 05:53:44.434345 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce78ac60-4347-4838-95d0-09b1342445d9-logs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.435711 master-0 kubenswrapper[34361]: I0224 05:53:44.434377 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.492998 master-0 kubenswrapper[34361]: I0224 05:53:44.492900 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 05:53:44.542435 master-0 kubenswrapper[34361]: I0224 05:53:44.538351 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpg5f\" (UniqueName: \"kubernetes.io/projected/ce78ac60-4347-4838-95d0-09b1342445d9-kube-api-access-zpg5f\") pod 
\"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.542435 master-0 kubenswrapper[34361]: I0224 05:53:44.538423 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-scripts\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.542435 master-0 kubenswrapper[34361]: I0224 05:53:44.538454 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-combined-ca-bundle\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.542435 master-0 kubenswrapper[34361]: I0224 05:53:44.538494 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-config-data\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.542435 master-0 kubenswrapper[34361]: I0224 05:53:44.538573 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce78ac60-4347-4838-95d0-09b1342445d9-httpd-run\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.542435 master-0 kubenswrapper[34361]: I0224 05:53:44.538602 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-internal-tls-certs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.542435 master-0 kubenswrapper[34361]: I0224 05:53:44.538642 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce78ac60-4347-4838-95d0-09b1342445d9-logs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.542435 master-0 kubenswrapper[34361]: I0224 05:53:44.538666 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.542907 master-0 kubenswrapper[34361]: I0224 05:53:44.542682 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce78ac60-4347-4838-95d0-09b1342445d9-logs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.543407 master-0 kubenswrapper[34361]: I0224 05:53:44.543342 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce78ac60-4347-4838-95d0-09b1342445d9-httpd-run\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.553347 master-0 kubenswrapper[34361]: I0224 05:53:44.551183 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-combined-ca-bundle\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.553347 master-0 kubenswrapper[34361]: I0224 05:53:44.552255 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-scripts\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.563349 master-0 kubenswrapper[34361]: I0224 05:53:44.558191 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-config-data\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.570365 master-0 kubenswrapper[34361]: I0224 05:53:44.568433 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n" event={"ID":"398060c6-ec35-4659-89a2-550ad8c81453","Type":"ContainerStarted","Data":"50a6bed273455d05151b6f2708e3f0bc7e0af934424f6af21e302193e2b54a6c"} Feb 24 05:53:44.570365 master-0 kubenswrapper[34361]: I0224 05:53:44.568516 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n" event={"ID":"398060c6-ec35-4659-89a2-550ad8c81453","Type":"ContainerStarted","Data":"475820b476d159cc8647c4a8b134a5aae6f8dda113412dec21f92603f4fa45b5"} Feb 24 05:53:44.570365 master-0 kubenswrapper[34361]: I0224 05:53:44.570187 34361 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 24 05:53:44.570365 master-0 kubenswrapper[34361]: I0224 05:53:44.570219 34361 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/03c215443f2c43fe19f38e42f351895e0bcaecfa5c9fe4b43c46bb54166b4232/globalmount\"" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.579411 master-0 kubenswrapper[34361]: I0224 05:53:44.572284 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce78ac60-4347-4838-95d0-09b1342445d9-internal-tls-certs\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.590964 master-0 kubenswrapper[34361]: I0224 05:53:44.586300 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpg5f\" (UniqueName: \"kubernetes.io/projected/ce78ac60-4347-4838-95d0-09b1342445d9-kube-api-access-zpg5f\") pod \"glance-bdafd-default-internal-api-0\" (UID: \"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:44.651343 master-0 kubenswrapper[34361]: I0224 05:53:44.644099 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05343afd-e975-47cb-a3f4-58664d26d871" path="/var/lib/kubelet/pods/05343afd-e975-47cb-a3f4-58664d26d871/volumes" Feb 24 05:53:44.651343 master-0 kubenswrapper[34361]: I0224 05:53:44.644946 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3057364f-388c-47da-adc8-4c8e074b8362" path="/var/lib/kubelet/pods/3057364f-388c-47da-adc8-4c8e074b8362/volumes" Feb 24 05:53:44.651343 master-0 
kubenswrapper[34361]: I0224 05:53:44.645798 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9353daa8-f1c5-493d-8f31-bfc3074c6223" path="/var/lib/kubelet/pods/9353daa8-f1c5-493d-8f31-bfc3074c6223/volumes" Feb 24 05:53:44.651728 master-0 kubenswrapper[34361]: I0224 05:53:44.651647 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n" podStartSLOduration=8.651620107 podStartE2EDuration="8.651620107s" podCreationTimestamp="2026-02-24 05:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:44.613880519 +0000 UTC m=+984.316497565" watchObservedRunningTime="2026-02-24 05:53:44.651620107 +0000 UTC m=+984.354237143" Feb 24 05:53:44.667310 master-0 kubenswrapper[34361]: I0224 05:53:44.661445 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qrtq2" event={"ID":"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24","Type":"ContainerStarted","Data":"982f90cfe38d398e0f9cff69f20afa1d36878ca7162b82c475a6720604383ca4"} Feb 24 05:53:44.667310 master-0 kubenswrapper[34361]: I0224 05:53:44.661547 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qrtq2" event={"ID":"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24","Type":"ContainerStarted","Data":"5b347a4b605cf41f29d8784998ad19b07e4c7704556f23168308e50585d9500d"} Feb 24 05:53:44.667310 master-0 kubenswrapper[34361]: I0224 05:53:44.663732 34361 generic.go:334] "Generic (PLEG): container finished" podID="719517cc-5f72-4139-aaa2-99bd0923702d" containerID="4db5f799d1cb3b12ed9df426a5a4502b09298ce90f3e8b66ca85c1216d557c0a" exitCode=0 Feb 24 05:53:44.667310 master-0 kubenswrapper[34361]: I0224 05:53:44.663845 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b78786dc-sn557" 
event={"ID":"719517cc-5f72-4139-aaa2-99bd0923702d","Type":"ContainerDied","Data":"4db5f799d1cb3b12ed9df426a5a4502b09298ce90f3e8b66ca85c1216d557c0a"} Feb 24 05:53:44.667310 master-0 kubenswrapper[34361]: I0224 05:53:44.663892 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b78786dc-sn557" event={"ID":"719517cc-5f72-4139-aaa2-99bd0923702d","Type":"ContainerStarted","Data":"ee83f25c2f6fb0446e017ccdecde0942684b4e552d4a223f5047ffd46a8aa895"} Feb 24 05:53:44.677355 master-0 kubenswrapper[34361]: I0224 05:53:44.676502 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c618-account-create-update-mmq8h" event={"ID":"06a1970d-fc4d-4522-a195-fa7fc9d5485d","Type":"ContainerStarted","Data":"af1522d029f8bb57a552add3e575b79016723a4e45e4ce2d7eb1c88f2b1f6d45"} Feb 24 05:53:44.677355 master-0 kubenswrapper[34361]: I0224 05:53:44.676564 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c618-account-create-update-mmq8h" event={"ID":"06a1970d-fc4d-4522-a195-fa7fc9d5485d","Type":"ContainerStarted","Data":"c425cb162f2e54b16a673c6ca1e30cbb974a0d7dbf7c282aaf228c2defe63583"} Feb 24 05:53:44.695383 master-0 kubenswrapper[34361]: I0224 05:53:44.691272 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025","Type":"ContainerStarted","Data":"d2fd7722240ee71f4dbf4d7be8ca57edf6cd92318e9d2f9ce2f96ea134c1865f"} Feb 24 05:53:44.727507 master-0 kubenswrapper[34361]: I0224 05:53:44.726398 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kzhmb" event={"ID":"d21763a0-0808-4fe2-94bb-37aea78c00f0","Type":"ContainerStarted","Data":"e5e444e3b28484ac024aae033f87647e5055ae23d82ae3f0756fc7aa00b3d0b0"} Feb 24 05:53:44.727507 master-0 kubenswrapper[34361]: I0224 05:53:44.726495 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-kzhmb" event={"ID":"d21763a0-0808-4fe2-94bb-37aea78c00f0","Type":"ContainerStarted","Data":"89eb2e381a90e7e2742ae5d5a8fb151304b333b06cc12bbef8f33394ded9e0f6"} Feb 24 05:53:44.745340 master-0 kubenswrapper[34361]: I0224 05:53:44.733374 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qrtq2" podStartSLOduration=8.733346951 podStartE2EDuration="8.733346951s" podCreationTimestamp="2026-02-24 05:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:44.647675901 +0000 UTC m=+984.350292947" watchObservedRunningTime="2026-02-24 05:53:44.733346951 +0000 UTC m=+984.435963997" Feb 24 05:53:44.775350 master-0 kubenswrapper[34361]: I0224 05:53:44.774864 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerStarted","Data":"a9bdc48aaecc10c6f9a5959e782e51eb61f90df09f16a8d5d0b273e885f8fb3c"} Feb 24 05:53:44.776348 master-0 kubenswrapper[34361]: I0224 05:53:44.775608 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c618-account-create-update-mmq8h" podStartSLOduration=8.77558627 podStartE2EDuration="8.77558627s" podCreationTimestamp="2026-02-24 05:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:44.761165991 +0000 UTC m=+984.463783037" watchObservedRunningTime="2026-02-24 05:53:44.77558627 +0000 UTC m=+984.478203316" Feb 24 05:53:44.789379 master-0 kubenswrapper[34361]: I0224 05:53:44.786408 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e077-account-create-update-fnxnr" 
event={"ID":"24794552-5cfa-428e-ad46-ce7a1794c7ec","Type":"ContainerStarted","Data":"d465ae7bc5b67aa22453114bb9d2bca2a310263c4849f3130a7ab19495572ff4"} Feb 24 05:53:44.789379 master-0 kubenswrapper[34361]: I0224 05:53:44.787020 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e077-account-create-update-fnxnr" event={"ID":"24794552-5cfa-428e-ad46-ce7a1794c7ec","Type":"ContainerStarted","Data":"4eb611ed7ce5643b79ab7cd363c2fca1c9a408bf10eabcb62f9bf7f508fc15b5"} Feb 24 05:53:44.804432 master-0 kubenswrapper[34361]: I0224 05:53:44.800342 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4kz4t" event={"ID":"481f56ba-4864-42fb-b0f3-02a4e4311e7d","Type":"ContainerStarted","Data":"9926dabed789d363070cf3d4e2ba027bba87827031dcfcbffeb311425028de1f"} Feb 24 05:53:44.804432 master-0 kubenswrapper[34361]: I0224 05:53:44.800405 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4kz4t" event={"ID":"481f56ba-4864-42fb-b0f3-02a4e4311e7d","Type":"ContainerStarted","Data":"48ed1b26591155342feef2c5dd49cacb5f8e9300c805866df4a810eddb92fabc"} Feb 24 05:53:44.856387 master-0 kubenswrapper[34361]: I0224 05:53:44.855211 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-kzhmb" podStartSLOduration=8.855186538 podStartE2EDuration="8.855186538s" podCreationTimestamp="2026-02-24 05:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:44.806117184 +0000 UTC m=+984.508734230" watchObservedRunningTime="2026-02-24 05:53:44.855186538 +0000 UTC m=+984.557803584" Feb 24 05:53:45.051790 master-0 kubenswrapper[34361]: I0224 05:53:45.051698 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-e077-account-create-update-fnxnr" podStartSLOduration=9.051666937 
podStartE2EDuration="9.051666937s" podCreationTimestamp="2026-02-24 05:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:44.841967271 +0000 UTC m=+984.544584327" watchObservedRunningTime="2026-02-24 05:53:45.051666937 +0000 UTC m=+984.754283983" Feb 24 05:53:45.070105 master-0 kubenswrapper[34361]: W0224 05:53:45.069536 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e7e3217_839a_4443_9bcf_a7e25f1ac757.slice/crio-a720671d1cb8c09fe545335325aa751addb46ee0d865ab190cbd82b20db28bf5 WatchSource:0}: Error finding container a720671d1cb8c09fe545335325aa751addb46ee0d865ab190cbd82b20db28bf5: Status 404 returned error can't find the container with id a720671d1cb8c09fe545335325aa751addb46ee0d865ab190cbd82b20db28bf5 Feb 24 05:53:45.131314 master-0 kubenswrapper[34361]: I0224 05:53:45.126990 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-4kz4t" podStartSLOduration=9.126960688 podStartE2EDuration="9.126960688s" podCreationTimestamp="2026-02-24 05:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:44.959706256 +0000 UTC m=+984.662323302" watchObservedRunningTime="2026-02-24 05:53:45.126960688 +0000 UTC m=+984.829577734" Feb 24 05:53:45.160235 master-0 kubenswrapper[34361]: I0224 05:53:45.160179 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 05:53:45.819257 master-0 kubenswrapper[34361]: I0224 05:53:45.819194 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-261a4ed4-1134-4ffa-8b7c-639ec8b67db7\" (UniqueName: \"kubernetes.io/csi/topolvm.io^db8b87f9-2acd-43ac-b8a5-43ce0b4aa5b0\") pod \"glance-bdafd-default-internal-api-0\" (UID: 
\"ce78ac60-4347-4838-95d0-09b1342445d9\") " pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:45.833331 master-0 kubenswrapper[34361]: I0224 05:53:45.833242 34361 generic.go:334] "Generic (PLEG): container finished" podID="398060c6-ec35-4659-89a2-550ad8c81453" containerID="50a6bed273455d05151b6f2708e3f0bc7e0af934424f6af21e302193e2b54a6c" exitCode=0 Feb 24 05:53:45.833820 master-0 kubenswrapper[34361]: I0224 05:53:45.833374 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n" event={"ID":"398060c6-ec35-4659-89a2-550ad8c81453","Type":"ContainerDied","Data":"50a6bed273455d05151b6f2708e3f0bc7e0af934424f6af21e302193e2b54a6c"} Feb 24 05:53:45.843528 master-0 kubenswrapper[34361]: I0224 05:53:45.843263 34361 generic.go:334] "Generic (PLEG): container finished" podID="24794552-5cfa-428e-ad46-ce7a1794c7ec" containerID="d465ae7bc5b67aa22453114bb9d2bca2a310263c4849f3130a7ab19495572ff4" exitCode=0 Feb 24 05:53:45.843528 master-0 kubenswrapper[34361]: I0224 05:53:45.843398 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e077-account-create-update-fnxnr" event={"ID":"24794552-5cfa-428e-ad46-ce7a1794c7ec","Type":"ContainerDied","Data":"d465ae7bc5b67aa22453114bb9d2bca2a310263c4849f3130a7ab19495572ff4"} Feb 24 05:53:45.853375 master-0 kubenswrapper[34361]: I0224 05:53:45.849177 34361 generic.go:334] "Generic (PLEG): container finished" podID="481f56ba-4864-42fb-b0f3-02a4e4311e7d" containerID="9926dabed789d363070cf3d4e2ba027bba87827031dcfcbffeb311425028de1f" exitCode=0 Feb 24 05:53:45.853375 master-0 kubenswrapper[34361]: I0224 05:53:45.849811 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4kz4t" event={"ID":"481f56ba-4864-42fb-b0f3-02a4e4311e7d","Type":"ContainerDied","Data":"9926dabed789d363070cf3d4e2ba027bba87827031dcfcbffeb311425028de1f"} Feb 24 05:53:45.877347 master-0 kubenswrapper[34361]: I0224 05:53:45.870432 34361 
generic.go:334] "Generic (PLEG): container finished" podID="06a1970d-fc4d-4522-a195-fa7fc9d5485d" containerID="af1522d029f8bb57a552add3e575b79016723a4e45e4ce2d7eb1c88f2b1f6d45" exitCode=0 Feb 24 05:53:45.877347 master-0 kubenswrapper[34361]: I0224 05:53:45.870628 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c618-account-create-update-mmq8h" event={"ID":"06a1970d-fc4d-4522-a195-fa7fc9d5485d","Type":"ContainerDied","Data":"af1522d029f8bb57a552add3e575b79016723a4e45e4ce2d7eb1c88f2b1f6d45"} Feb 24 05:53:45.877784 master-0 kubenswrapper[34361]: I0224 05:53:45.877608 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-bdafd-default-internal-api-0" Feb 24 05:53:45.901303 master-0 kubenswrapper[34361]: I0224 05:53:45.900504 34361 generic.go:334] "Generic (PLEG): container finished" podID="0e7e3217-839a-4443-9bcf-a7e25f1ac757" containerID="d501fc2cc32d826a41e952f91a222fb16e3c4dd6877c7da15caa5392ed872c9a" exitCode=0 Feb 24 05:53:45.901303 master-0 kubenswrapper[34361]: I0224 05:53:45.900600 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0e7e3217-839a-4443-9bcf-a7e25f1ac757","Type":"ContainerDied","Data":"d501fc2cc32d826a41e952f91a222fb16e3c4dd6877c7da15caa5392ed872c9a"} Feb 24 05:53:45.901303 master-0 kubenswrapper[34361]: I0224 05:53:45.900632 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0e7e3217-839a-4443-9bcf-a7e25f1ac757","Type":"ContainerStarted","Data":"a720671d1cb8c09fe545335325aa751addb46ee0d865ab190cbd82b20db28bf5"} Feb 24 05:53:45.904106 master-0 kubenswrapper[34361]: I0224 05:53:45.903979 34361 generic.go:334] "Generic (PLEG): container finished" podID="d21763a0-0808-4fe2-94bb-37aea78c00f0" containerID="e5e444e3b28484ac024aae033f87647e5055ae23d82ae3f0756fc7aa00b3d0b0" exitCode=0 Feb 24 05:53:45.904106 master-0 kubenswrapper[34361]: I0224 05:53:45.904051 34361 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kzhmb" event={"ID":"d21763a0-0808-4fe2-94bb-37aea78c00f0","Type":"ContainerDied","Data":"e5e444e3b28484ac024aae033f87647e5055ae23d82ae3f0756fc7aa00b3d0b0"} Feb 24 05:53:45.911796 master-0 kubenswrapper[34361]: I0224 05:53:45.911605 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025","Type":"ContainerStarted","Data":"b9147b9c554ed663c4735d003db6e0e92dd8ffbc61a14917e31c7bdbf0dabab0"} Feb 24 05:53:45.945693 master-0 kubenswrapper[34361]: I0224 05:53:45.945633 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b78786dc-sn557" event={"ID":"719517cc-5f72-4139-aaa2-99bd0923702d","Type":"ContainerStarted","Data":"631798e53026c7e2f2a5bac4494414ef08f5d0fe4686810f59fc7283ca65a56d"} Feb 24 05:53:45.947979 master-0 kubenswrapper[34361]: I0224 05:53:45.947565 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55b78786dc-sn557" Feb 24 05:53:45.969862 master-0 kubenswrapper[34361]: I0224 05:53:45.969814 34361 generic.go:334] "Generic (PLEG): container finished" podID="ebf1db3e-e40c-41b6-ad8f-c1decbbfba24" containerID="982f90cfe38d398e0f9cff69f20afa1d36878ca7162b82c475a6720604383ca4" exitCode=0 Feb 24 05:53:45.970085 master-0 kubenswrapper[34361]: I0224 05:53:45.970059 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qrtq2" event={"ID":"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24","Type":"ContainerDied","Data":"982f90cfe38d398e0f9cff69f20afa1d36878ca7162b82c475a6720604383ca4"} Feb 24 05:53:46.064067 master-0 kubenswrapper[34361]: I0224 05:53:46.063080 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55b78786dc-sn557" podStartSLOduration=9.063054625 podStartE2EDuration="9.063054625s" podCreationTimestamp="2026-02-24 05:53:37 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:46.047897695 +0000 UTC m=+985.750514731" watchObservedRunningTime="2026-02-24 05:53:46.063054625 +0000 UTC m=+985.765671671" Feb 24 05:53:46.618803 master-0 kubenswrapper[34361]: I0224 05:53:46.618733 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 24 05:53:46.750182 master-0 kubenswrapper[34361]: I0224 05:53:46.750129 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-config\") pod \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " Feb 24 05:53:46.750582 master-0 kubenswrapper[34361]: I0224 05:53:46.750217 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic\") pod \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " Feb 24 05:53:46.750582 master-0 kubenswrapper[34361]: I0224 05:53:46.750310 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0e7e3217-839a-4443-9bcf-a7e25f1ac757-etc-podinfo\") pod \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " Feb 24 05:53:46.750700 master-0 kubenswrapper[34361]: I0224 05:53:46.750678 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " Feb 24 05:53:46.750741 master-0 
kubenswrapper[34361]: I0224 05:53:46.750713 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtccb\" (UniqueName: \"kubernetes.io/projected/0e7e3217-839a-4443-9bcf-a7e25f1ac757-kube-api-access-dtccb\") pod \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " Feb 24 05:53:46.750786 master-0 kubenswrapper[34361]: I0224 05:53:46.750778 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-scripts\") pod \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " Feb 24 05:53:46.751015 master-0 kubenswrapper[34361]: I0224 05:53:46.750961 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-combined-ca-bundle\") pod \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\" (UID: \"0e7e3217-839a-4443-9bcf-a7e25f1ac757\") " Feb 24 05:53:46.751211 master-0 kubenswrapper[34361]: I0224 05:53:46.751151 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "0e7e3217-839a-4443-9bcf-a7e25f1ac757" (UID: "0e7e3217-839a-4443-9bcf-a7e25f1ac757"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:46.751559 master-0 kubenswrapper[34361]: I0224 05:53:46.751522 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "0e7e3217-839a-4443-9bcf-a7e25f1ac757" (UID: "0e7e3217-839a-4443-9bcf-a7e25f1ac757"). 
InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:53:46.752494 master-0 kubenswrapper[34361]: I0224 05:53:46.752459 34361 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:46.752494 master-0 kubenswrapper[34361]: I0224 05:53:46.752489 34361 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/0e7e3217-839a-4443-9bcf-a7e25f1ac757-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:46.754850 master-0 kubenswrapper[34361]: I0224 05:53:46.754554 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0e7e3217-839a-4443-9bcf-a7e25f1ac757-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "0e7e3217-839a-4443-9bcf-a7e25f1ac757" (UID: "0e7e3217-839a-4443-9bcf-a7e25f1ac757"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 24 05:53:46.756917 master-0 kubenswrapper[34361]: I0224 05:53:46.756826 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-scripts" (OuterVolumeSpecName: "scripts") pod "0e7e3217-839a-4443-9bcf-a7e25f1ac757" (UID: "0e7e3217-839a-4443-9bcf-a7e25f1ac757"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:46.758686 master-0 kubenswrapper[34361]: I0224 05:53:46.758643 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7e3217-839a-4443-9bcf-a7e25f1ac757-kube-api-access-dtccb" (OuterVolumeSpecName: "kube-api-access-dtccb") pod "0e7e3217-839a-4443-9bcf-a7e25f1ac757" (UID: "0e7e3217-839a-4443-9bcf-a7e25f1ac757"). 
InnerVolumeSpecName "kube-api-access-dtccb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:46.758976 master-0 kubenswrapper[34361]: I0224 05:53:46.758945 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-config" (OuterVolumeSpecName: "config") pod "0e7e3217-839a-4443-9bcf-a7e25f1ac757" (UID: "0e7e3217-839a-4443-9bcf-a7e25f1ac757"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:46.799148 master-0 kubenswrapper[34361]: I0224 05:53:46.798972 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e7e3217-839a-4443-9bcf-a7e25f1ac757" (UID: "0e7e3217-839a-4443-9bcf-a7e25f1ac757"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:53:46.808500 master-0 kubenswrapper[34361]: I0224 05:53:46.808419 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-bdafd-default-internal-api-0"] Feb 24 05:53:46.857365 master-0 kubenswrapper[34361]: I0224 05:53:46.856041 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtccb\" (UniqueName: \"kubernetes.io/projected/0e7e3217-839a-4443-9bcf-a7e25f1ac757-kube-api-access-dtccb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:46.857365 master-0 kubenswrapper[34361]: I0224 05:53:46.856113 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:46.857365 master-0 kubenswrapper[34361]: I0224 05:53:46.856129 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-combined-ca-bundle\") 
on node \"master-0\" DevicePath \"\"" Feb 24 05:53:46.857365 master-0 kubenswrapper[34361]: I0224 05:53:46.856142 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e7e3217-839a-4443-9bcf-a7e25f1ac757-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:46.857365 master-0 kubenswrapper[34361]: I0224 05:53:46.856155 34361 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/0e7e3217-839a-4443-9bcf-a7e25f1ac757-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:46.990014 master-0 kubenswrapper[34361]: I0224 05:53:46.989917 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"0e7e3217-839a-4443-9bcf-a7e25f1ac757","Type":"ContainerDied","Data":"a720671d1cb8c09fe545335325aa751addb46ee0d865ab190cbd82b20db28bf5"} Feb 24 05:53:46.990014 master-0 kubenswrapper[34361]: I0224 05:53:46.989981 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 24 05:53:46.990468 master-0 kubenswrapper[34361]: I0224 05:53:46.990000 34361 scope.go:117] "RemoveContainer" containerID="d501fc2cc32d826a41e952f91a222fb16e3c4dd6877c7da15caa5392ed872c9a" Feb 24 05:53:46.994721 master-0 kubenswrapper[34361]: I0224 05:53:46.994658 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"ce78ac60-4347-4838-95d0-09b1342445d9","Type":"ContainerStarted","Data":"24e260f8bd892cce03322b34579726c6f507bafa1c66d145c0c0f8f3f2184f08"} Feb 24 05:53:47.000254 master-0 kubenswrapper[34361]: I0224 05:53:47.000143 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-external-api-0" event={"ID":"7cc967eb-a8c3-4147-a3ac-bd6af5dd3025","Type":"ContainerStarted","Data":"092b7c1429714bd60e1d81298af2e5bb8f764c1a2e96256148547704aa2691aa"} Feb 24 05:53:47.048531 master-0 kubenswrapper[34361]: I0224 05:53:47.047404 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bdafd-default-external-api-0" podStartSLOduration=12.047369342 podStartE2EDuration="12.047369342s" podCreationTimestamp="2026-02-24 05:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:47.035376729 +0000 UTC m=+986.737993775" watchObservedRunningTime="2026-02-24 05:53:47.047369342 +0000 UTC m=+986.749986388" Feb 24 05:53:47.170280 master-0 kubenswrapper[34361]: I0224 05:53:47.169596 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 05:53:47.264997 master-0 kubenswrapper[34361]: I0224 05:53:47.262817 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 05:53:47.288072 master-0 kubenswrapper[34361]: I0224 05:53:47.288013 34361 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ironic-inspector-0"] Feb 24 05:53:47.288791 master-0 kubenswrapper[34361]: E0224 05:53:47.288764 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7e3217-839a-4443-9bcf-a7e25f1ac757" containerName="ironic-python-agent-init" Feb 24 05:53:47.288791 master-0 kubenswrapper[34361]: I0224 05:53:47.288787 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7e3217-839a-4443-9bcf-a7e25f1ac757" containerName="ironic-python-agent-init" Feb 24 05:53:47.289113 master-0 kubenswrapper[34361]: I0224 05:53:47.289087 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7e3217-839a-4443-9bcf-a7e25f1ac757" containerName="ironic-python-agent-init" Feb 24 05:53:47.306524 master-0 kubenswrapper[34361]: I0224 05:53:47.306451 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 24 05:53:47.308521 master-0 kubenswrapper[34361]: I0224 05:53:47.308246 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 24 05:53:47.313913 master-0 kubenswrapper[34361]: I0224 05:53:47.313887 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 24 05:53:47.314279 master-0 kubenswrapper[34361]: I0224 05:53:47.314207 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 24 05:53:47.314347 master-0 kubenswrapper[34361]: I0224 05:53:47.314293 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 24 05:53:47.314491 master-0 kubenswrapper[34361]: I0224 05:53:47.314475 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 24 05:53:47.314727 master-0 kubenswrapper[34361]: I0224 05:53:47.314701 34361 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 24 05:53:47.377781 master-0 kubenswrapper[34361]: I0224 05:53:47.377691 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-config\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.378040 master-0 kubenswrapper[34361]: I0224 05:53:47.377939 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-scripts\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.378040 master-0 kubenswrapper[34361]: I0224 05:53:47.377975 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/2c572e96-fe42-4adb-83c9-3316ba4e374a-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.378119 master-0 kubenswrapper[34361]: I0224 05:53:47.378057 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2c572e96-fe42-4adb-83c9-3316ba4e374a-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.378119 master-0 kubenswrapper[34361]: I0224 05:53:47.378102 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx44r\" (UniqueName: \"kubernetes.io/projected/2c572e96-fe42-4adb-83c9-3316ba4e374a-kube-api-access-jx44r\") pod \"ironic-inspector-0\" (UID: 
\"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.378203 master-0 kubenswrapper[34361]: I0224 05:53:47.378149 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.378241 master-0 kubenswrapper[34361]: I0224 05:53:47.378197 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.378281 master-0 kubenswrapper[34361]: I0224 05:53:47.378254 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.378767 master-0 kubenswrapper[34361]: I0224 05:53:47.378345 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/2c572e96-fe42-4adb-83c9-3316ba4e374a-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.497284 master-0 kubenswrapper[34361]: I0224 05:53:47.496703 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-config\") 
pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.497284 master-0 kubenswrapper[34361]: I0224 05:53:47.496820 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-scripts\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.503169 master-0 kubenswrapper[34361]: I0224 05:53:47.503049 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/2c572e96-fe42-4adb-83c9-3316ba4e374a-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.503591 master-0 kubenswrapper[34361]: I0224 05:53:47.503522 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2c572e96-fe42-4adb-83c9-3316ba4e374a-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.503591 master-0 kubenswrapper[34361]: I0224 05:53:47.503568 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx44r\" (UniqueName: \"kubernetes.io/projected/2c572e96-fe42-4adb-83c9-3316ba4e374a-kube-api-access-jx44r\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.503707 master-0 kubenswrapper[34361]: I0224 05:53:47.503615 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " 
pod="openstack/ironic-inspector-0" Feb 24 05:53:47.503707 master-0 kubenswrapper[34361]: I0224 05:53:47.503669 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.503841 master-0 kubenswrapper[34361]: I0224 05:53:47.503726 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.503841 master-0 kubenswrapper[34361]: I0224 05:53:47.503814 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/2c572e96-fe42-4adb-83c9-3316ba4e374a-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.507797 master-0 kubenswrapper[34361]: I0224 05:53:47.507672 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/2c572e96-fe42-4adb-83c9-3316ba4e374a-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.508407 master-0 kubenswrapper[34361]: I0224 05:53:47.508315 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-config\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " 
pod="openstack/ironic-inspector-0" Feb 24 05:53:47.508730 master-0 kubenswrapper[34361]: I0224 05:53:47.508670 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/2c572e96-fe42-4adb-83c9-3316ba4e374a-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.509822 master-0 kubenswrapper[34361]: I0224 05:53:47.509782 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.516146 master-0 kubenswrapper[34361]: I0224 05:53:47.516059 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/2c572e96-fe42-4adb-83c9-3316ba4e374a-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.516146 master-0 kubenswrapper[34361]: I0224 05:53:47.516064 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-scripts\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.521053 master-0 kubenswrapper[34361]: I0224 05:53:47.520461 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.527675 master-0 kubenswrapper[34361]: I0224 05:53:47.527630 34361 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx44r\" (UniqueName: \"kubernetes.io/projected/2c572e96-fe42-4adb-83c9-3316ba4e374a-kube-api-access-jx44r\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.529603 master-0 kubenswrapper[34361]: I0224 05:53:47.529562 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c572e96-fe42-4adb-83c9-3316ba4e374a-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"2c572e96-fe42-4adb-83c9-3316ba4e374a\") " pod="openstack/ironic-inspector-0" Feb 24 05:53:47.642691 master-0 kubenswrapper[34361]: I0224 05:53:47.642643 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 24 05:53:47.927779 master-0 kubenswrapper[34361]: I0224 05:53:47.902588 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:48.047055 master-0 kubenswrapper[34361]: I0224 05:53:48.046926 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-operator-scripts\") pod \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\" (UID: \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\") " Feb 24 05:53:48.047780 master-0 kubenswrapper[34361]: I0224 05:53:48.047752 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgcfn\" (UniqueName: \"kubernetes.io/projected/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-kube-api-access-mgcfn\") pod \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\" (UID: \"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24\") " Feb 24 05:53:48.049994 master-0 kubenswrapper[34361]: I0224 05:53:48.048409 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebf1db3e-e40c-41b6-ad8f-c1decbbfba24" (UID: "ebf1db3e-e40c-41b6-ad8f-c1decbbfba24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:48.051130 master-0 kubenswrapper[34361]: I0224 05:53:48.051097 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:48.052750 master-0 kubenswrapper[34361]: I0224 05:53:48.052680 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-kube-api-access-mgcfn" (OuterVolumeSpecName: "kube-api-access-mgcfn") pod "ebf1db3e-e40c-41b6-ad8f-c1decbbfba24" (UID: "ebf1db3e-e40c-41b6-ad8f-c1decbbfba24"). InnerVolumeSpecName "kube-api-access-mgcfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:48.061251 master-0 kubenswrapper[34361]: I0224 05:53:48.060872 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qrtq2" event={"ID":"ebf1db3e-e40c-41b6-ad8f-c1decbbfba24","Type":"ContainerDied","Data":"5b347a4b605cf41f29d8784998ad19b07e4c7704556f23168308e50585d9500d"} Feb 24 05:53:48.061251 master-0 kubenswrapper[34361]: I0224 05:53:48.060913 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b347a4b605cf41f29d8784998ad19b07e4c7704556f23168308e50585d9500d" Feb 24 05:53:48.061251 master-0 kubenswrapper[34361]: I0224 05:53:48.060984 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qrtq2" Feb 24 05:53:48.068751 master-0 kubenswrapper[34361]: I0224 05:53:48.068615 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"ce78ac60-4347-4838-95d0-09b1342445d9","Type":"ContainerStarted","Data":"b10059c92e7c9c30bc137f111c737798bfd3b45b936697b519e9d5d7722f0d8f"} Feb 24 05:53:48.092802 master-0 kubenswrapper[34361]: I0224 05:53:48.092740 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4kz4t" Feb 24 05:53:48.141720 master-0 kubenswrapper[34361]: I0224 05:53:48.141618 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e077-account-create-update-fnxnr" Feb 24 05:53:48.155606 master-0 kubenswrapper[34361]: I0224 05:53:48.155059 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c618-account-create-update-mmq8h" Feb 24 05:53:48.184634 master-0 kubenswrapper[34361]: I0224 05:53:48.184573 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kzhmb" Feb 24 05:53:48.185227 master-0 kubenswrapper[34361]: I0224 05:53:48.185183 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n" Feb 24 05:53:48.192829 master-0 kubenswrapper[34361]: I0224 05:53:48.192777 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgcfn\" (UniqueName: \"kubernetes.io/projected/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24-kube-api-access-mgcfn\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.298930 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckddz\" (UniqueName: \"kubernetes.io/projected/d21763a0-0808-4fe2-94bb-37aea78c00f0-kube-api-access-ckddz\") pod \"d21763a0-0808-4fe2-94bb-37aea78c00f0\" (UID: \"d21763a0-0808-4fe2-94bb-37aea78c00f0\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.299070 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf6bp\" (UniqueName: \"kubernetes.io/projected/398060c6-ec35-4659-89a2-550ad8c81453-kube-api-access-mf6bp\") pod \"398060c6-ec35-4659-89a2-550ad8c81453\" (UID: \"398060c6-ec35-4659-89a2-550ad8c81453\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.299145 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f56ba-4864-42fb-b0f3-02a4e4311e7d-operator-scripts\") pod \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\" (UID: \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.299218 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v54xk\" (UniqueName: 
\"kubernetes.io/projected/24794552-5cfa-428e-ad46-ce7a1794c7ec-kube-api-access-v54xk\") pod \"24794552-5cfa-428e-ad46-ce7a1794c7ec\" (UID: \"24794552-5cfa-428e-ad46-ce7a1794c7ec\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.299554 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21763a0-0808-4fe2-94bb-37aea78c00f0-operator-scripts\") pod \"d21763a0-0808-4fe2-94bb-37aea78c00f0\" (UID: \"d21763a0-0808-4fe2-94bb-37aea78c00f0\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.299950 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/481f56ba-4864-42fb-b0f3-02a4e4311e7d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "481f56ba-4864-42fb-b0f3-02a4e4311e7d" (UID: "481f56ba-4864-42fb-b0f3-02a4e4311e7d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.300254 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjxbh\" (UniqueName: \"kubernetes.io/projected/481f56ba-4864-42fb-b0f3-02a4e4311e7d-kube-api-access-gjxbh\") pod \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\" (UID: \"481f56ba-4864-42fb-b0f3-02a4e4311e7d\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.300497 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fw2r\" (UniqueName: \"kubernetes.io/projected/06a1970d-fc4d-4522-a195-fa7fc9d5485d-kube-api-access-8fw2r\") pod \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\" (UID: \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.300536 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/398060c6-ec35-4659-89a2-550ad8c81453-operator-scripts\") pod \"398060c6-ec35-4659-89a2-550ad8c81453\" (UID: \"398060c6-ec35-4659-89a2-550ad8c81453\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.300571 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a1970d-fc4d-4522-a195-fa7fc9d5485d-operator-scripts\") pod \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\" (UID: \"06a1970d-fc4d-4522-a195-fa7fc9d5485d\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.300631 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24794552-5cfa-428e-ad46-ce7a1794c7ec-operator-scripts\") pod \"24794552-5cfa-428e-ad46-ce7a1794c7ec\" (UID: \"24794552-5cfa-428e-ad46-ce7a1794c7ec\") " Feb 24 05:53:48.301040 master-0 kubenswrapper[34361]: I0224 05:53:48.300671 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d21763a0-0808-4fe2-94bb-37aea78c00f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d21763a0-0808-4fe2-94bb-37aea78c00f0" (UID: "d21763a0-0808-4fe2-94bb-37aea78c00f0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:48.301866 master-0 kubenswrapper[34361]: I0224 05:53:48.301834 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481f56ba-4864-42fb-b0f3-02a4e4311e7d-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:48.302008 master-0 kubenswrapper[34361]: I0224 05:53:48.301866 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d21763a0-0808-4fe2-94bb-37aea78c00f0-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:53:48.304275 master-0 kubenswrapper[34361]: I0224 05:53:48.303009 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/398060c6-ec35-4659-89a2-550ad8c81453-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "398060c6-ec35-4659-89a2-550ad8c81453" (UID: "398060c6-ec35-4659-89a2-550ad8c81453"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:48.304275 master-0 kubenswrapper[34361]: I0224 05:53:48.303458 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a1970d-fc4d-4522-a195-fa7fc9d5485d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06a1970d-fc4d-4522-a195-fa7fc9d5485d" (UID: "06a1970d-fc4d-4522-a195-fa7fc9d5485d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:48.305771 master-0 kubenswrapper[34361]: I0224 05:53:48.305742 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24794552-5cfa-428e-ad46-ce7a1794c7ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24794552-5cfa-428e-ad46-ce7a1794c7ec" (UID: "24794552-5cfa-428e-ad46-ce7a1794c7ec"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:53:48.307876 master-0 kubenswrapper[34361]: I0224 05:53:48.307797 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/398060c6-ec35-4659-89a2-550ad8c81453-kube-api-access-mf6bp" (OuterVolumeSpecName: "kube-api-access-mf6bp") pod "398060c6-ec35-4659-89a2-550ad8c81453" (UID: "398060c6-ec35-4659-89a2-550ad8c81453"). InnerVolumeSpecName "kube-api-access-mf6bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:48.308577 master-0 kubenswrapper[34361]: I0224 05:53:48.308508 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21763a0-0808-4fe2-94bb-37aea78c00f0-kube-api-access-ckddz" (OuterVolumeSpecName: "kube-api-access-ckddz") pod "d21763a0-0808-4fe2-94bb-37aea78c00f0" (UID: "d21763a0-0808-4fe2-94bb-37aea78c00f0"). InnerVolumeSpecName "kube-api-access-ckddz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:48.312837 master-0 kubenswrapper[34361]: I0224 05:53:48.312756 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24794552-5cfa-428e-ad46-ce7a1794c7ec-kube-api-access-v54xk" (OuterVolumeSpecName: "kube-api-access-v54xk") pod "24794552-5cfa-428e-ad46-ce7a1794c7ec" (UID: "24794552-5cfa-428e-ad46-ce7a1794c7ec"). InnerVolumeSpecName "kube-api-access-v54xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:53:48.316477 master-0 kubenswrapper[34361]: I0224 05:53:48.316330 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a1970d-fc4d-4522-a195-fa7fc9d5485d-kube-api-access-8fw2r" (OuterVolumeSpecName: "kube-api-access-8fw2r") pod "06a1970d-fc4d-4522-a195-fa7fc9d5485d" (UID: "06a1970d-fc4d-4522-a195-fa7fc9d5485d"). InnerVolumeSpecName "kube-api-access-8fw2r". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:48.321681 master-0 kubenswrapper[34361]: I0224 05:53:48.321545 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481f56ba-4864-42fb-b0f3-02a4e4311e7d-kube-api-access-gjxbh" (OuterVolumeSpecName: "kube-api-access-gjxbh") pod "481f56ba-4864-42fb-b0f3-02a4e4311e7d" (UID: "481f56ba-4864-42fb-b0f3-02a4e4311e7d"). InnerVolumeSpecName "kube-api-access-gjxbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:48.404527 master-0 kubenswrapper[34361]: I0224 05:53:48.404471 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckddz\" (UniqueName: \"kubernetes.io/projected/d21763a0-0808-4fe2-94bb-37aea78c00f0-kube-api-access-ckddz\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:48.404527 master-0 kubenswrapper[34361]: I0224 05:53:48.404523 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf6bp\" (UniqueName: \"kubernetes.io/projected/398060c6-ec35-4659-89a2-550ad8c81453-kube-api-access-mf6bp\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:48.404527 master-0 kubenswrapper[34361]: I0224 05:53:48.404541 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v54xk\" (UniqueName: \"kubernetes.io/projected/24794552-5cfa-428e-ad46-ce7a1794c7ec-kube-api-access-v54xk\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:48.404808 master-0 kubenswrapper[34361]: I0224 05:53:48.404552 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjxbh\" (UniqueName: \"kubernetes.io/projected/481f56ba-4864-42fb-b0f3-02a4e4311e7d-kube-api-access-gjxbh\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:48.404808 master-0 kubenswrapper[34361]: I0224 05:53:48.404564 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fw2r\" (UniqueName: \"kubernetes.io/projected/06a1970d-fc4d-4522-a195-fa7fc9d5485d-kube-api-access-8fw2r\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:48.404808 master-0 kubenswrapper[34361]: I0224 05:53:48.404577 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/398060c6-ec35-4659-89a2-550ad8c81453-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:48.404808 master-0 kubenswrapper[34361]: I0224 05:53:48.404588 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a1970d-fc4d-4522-a195-fa7fc9d5485d-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:48.404808 master-0 kubenswrapper[34361]: I0224 05:53:48.404600 34361 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24794552-5cfa-428e-ad46-ce7a1794c7ec-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:48.620845 master-0 kubenswrapper[34361]: I0224 05:53:48.620779 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7e3217-839a-4443-9bcf-a7e25f1ac757" path="/var/lib/kubelet/pods/0e7e3217-839a-4443-9bcf-a7e25f1ac757/volumes"
Feb 24 05:53:48.622617 master-0 kubenswrapper[34361]: I0224 05:53:48.622581 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"]
Feb 24 05:53:49.090286 master-0 kubenswrapper[34361]: I0224 05:53:49.090205 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kzhmb" event={"ID":"d21763a0-0808-4fe2-94bb-37aea78c00f0","Type":"ContainerDied","Data":"89eb2e381a90e7e2742ae5d5a8fb151304b333b06cc12bbef8f33394ded9e0f6"}
Feb 24 05:53:49.090286 master-0 kubenswrapper[34361]: I0224 05:53:49.090275 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89eb2e381a90e7e2742ae5d5a8fb151304b333b06cc12bbef8f33394ded9e0f6"
Feb 24 05:53:49.091117 master-0 kubenswrapper[34361]: I0224 05:53:49.090269 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kzhmb"
Feb 24 05:53:49.099104 master-0 kubenswrapper[34361]: I0224 05:53:49.099042 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n"
Feb 24 05:53:49.099354 master-0 kubenswrapper[34361]: I0224 05:53:49.099085 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a9d-account-create-update-hxq4n" event={"ID":"398060c6-ec35-4659-89a2-550ad8c81453","Type":"ContainerDied","Data":"475820b476d159cc8647c4a8b134a5aae6f8dda113412dec21f92603f4fa45b5"}
Feb 24 05:53:49.099461 master-0 kubenswrapper[34361]: I0224 05:53:49.099444 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475820b476d159cc8647c4a8b134a5aae6f8dda113412dec21f92603f4fa45b5"
Feb 24 05:53:49.104242 master-0 kubenswrapper[34361]: I0224 05:53:49.104192 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e077-account-create-update-fnxnr" event={"ID":"24794552-5cfa-428e-ad46-ce7a1794c7ec","Type":"ContainerDied","Data":"4eb611ed7ce5643b79ab7cd363c2fca1c9a408bf10eabcb62f9bf7f508fc15b5"}
Feb 24 05:53:49.104371 master-0 kubenswrapper[34361]: I0224 05:53:49.104244 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb611ed7ce5643b79ab7cd363c2fca1c9a408bf10eabcb62f9bf7f508fc15b5"
Feb 24 05:53:49.104459 master-0 kubenswrapper[34361]: I0224 05:53:49.104302 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e077-account-create-update-fnxnr"
Feb 24 05:53:49.112398 master-0 kubenswrapper[34361]: I0224 05:53:49.112226 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-4kz4t" event={"ID":"481f56ba-4864-42fb-b0f3-02a4e4311e7d","Type":"ContainerDied","Data":"48ed1b26591155342feef2c5dd49cacb5f8e9300c805866df4a810eddb92fabc"}
Feb 24 05:53:49.112398 master-0 kubenswrapper[34361]: I0224 05:53:49.112280 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ed1b26591155342feef2c5dd49cacb5f8e9300c805866df4a810eddb92fabc"
Feb 24 05:53:49.112398 master-0 kubenswrapper[34361]: I0224 05:53:49.112389 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-4kz4t"
Feb 24 05:53:49.119211 master-0 kubenswrapper[34361]: I0224 05:53:49.119123 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-bdafd-default-internal-api-0" event={"ID":"ce78ac60-4347-4838-95d0-09b1342445d9","Type":"ContainerStarted","Data":"58a6fbdf6662a9c669bfc30fa6ef966f10aabb12d8ef649475129148f6f2a82d"}
Feb 24 05:53:49.123627 master-0 kubenswrapper[34361]: I0224 05:53:49.123558 34361 generic.go:334] "Generic (PLEG): container finished" podID="2c572e96-fe42-4adb-83c9-3316ba4e374a" containerID="73145dea213fc3103a4588a954a2b5ebf2bdd362bd0d85a095390f65aeda1c94" exitCode=0
Feb 24 05:53:49.123708 master-0 kubenswrapper[34361]: I0224 05:53:49.123675 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"2c572e96-fe42-4adb-83c9-3316ba4e374a","Type":"ContainerDied","Data":"73145dea213fc3103a4588a954a2b5ebf2bdd362bd0d85a095390f65aeda1c94"}
Feb 24 05:53:49.123708 master-0 kubenswrapper[34361]: I0224 05:53:49.123704 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"2c572e96-fe42-4adb-83c9-3316ba4e374a","Type":"ContainerStarted","Data":"17db0aa4e8ace51fff564ca6aea495199350ff8d293ed12f74ce7bba06136078"}
Feb 24 05:53:49.129890 master-0 kubenswrapper[34361]: I0224 05:53:49.128785 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c618-account-create-update-mmq8h" event={"ID":"06a1970d-fc4d-4522-a195-fa7fc9d5485d","Type":"ContainerDied","Data":"c425cb162f2e54b16a673c6ca1e30cbb974a0d7dbf7c282aaf228c2defe63583"}
Feb 24 05:53:49.129890 master-0 kubenswrapper[34361]: I0224 05:53:49.128819 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c425cb162f2e54b16a673c6ca1e30cbb974a0d7dbf7c282aaf228c2defe63583"
Feb 24 05:53:49.129890 master-0 kubenswrapper[34361]: I0224 05:53:49.128914 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c618-account-create-update-mmq8h"
Feb 24 05:53:49.187107 master-0 kubenswrapper[34361]: I0224 05:53:49.182976 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-bdafd-default-internal-api-0" podStartSLOduration=5.182949412 podStartE2EDuration="5.182949412s" podCreationTimestamp="2026-02-24 05:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:53:49.157097315 +0000 UTC m=+988.859714391" watchObservedRunningTime="2026-02-24 05:53:49.182949412 +0000 UTC m=+988.885566458"
Feb 24 05:53:49.822946 master-0 kubenswrapper[34361]: I0224 05:53:49.822282 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-856d98ff5d-2p7np"
Feb 24 05:53:50.150532 master-0 kubenswrapper[34361]: I0224 05:53:50.150280 34361 generic.go:334] "Generic (PLEG): container finished" podID="74198545-a0ee-4142-93a6-86175a1d3c02" containerID="a9bdc48aaecc10c6f9a5959e782e51eb61f90df09f16a8d5d0b273e885f8fb3c" exitCode=0
Feb 24 05:53:50.151417 master-0 kubenswrapper[34361]: I0224 05:53:50.150364 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerDied","Data":"a9bdc48aaecc10c6f9a5959e782e51eb61f90df09f16a8d5d0b273e885f8fb3c"}
Feb 24 05:53:52.618122 master-0 kubenswrapper[34361]: I0224 05:53:52.618039 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ph4c9"]
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: E0224 05:53:52.618590 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf1db3e-e40c-41b6-ad8f-c1decbbfba24" containerName="mariadb-database-create"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: I0224 05:53:52.618610 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf1db3e-e40c-41b6-ad8f-c1decbbfba24" containerName="mariadb-database-create"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: E0224 05:53:52.618637 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a1970d-fc4d-4522-a195-fa7fc9d5485d" containerName="mariadb-account-create-update"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: I0224 05:53:52.618660 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a1970d-fc4d-4522-a195-fa7fc9d5485d" containerName="mariadb-account-create-update"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: E0224 05:53:52.618676 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24794552-5cfa-428e-ad46-ce7a1794c7ec" containerName="mariadb-account-create-update"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: I0224 05:53:52.618686 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="24794552-5cfa-428e-ad46-ce7a1794c7ec" containerName="mariadb-account-create-update"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: E0224 05:53:52.618729 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="398060c6-ec35-4659-89a2-550ad8c81453" containerName="mariadb-account-create-update"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: I0224 05:53:52.618739 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="398060c6-ec35-4659-89a2-550ad8c81453" containerName="mariadb-account-create-update"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: E0224 05:53:52.618753 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481f56ba-4864-42fb-b0f3-02a4e4311e7d" containerName="mariadb-database-create"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: I0224 05:53:52.618762 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="481f56ba-4864-42fb-b0f3-02a4e4311e7d" containerName="mariadb-database-create"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: E0224 05:53:52.618817 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21763a0-0808-4fe2-94bb-37aea78c00f0" containerName="mariadb-database-create"
Feb 24 05:53:52.618880 master-0 kubenswrapper[34361]: I0224 05:53:52.618827 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21763a0-0808-4fe2-94bb-37aea78c00f0" containerName="mariadb-database-create"
Feb 24 05:53:52.619276 master-0 kubenswrapper[34361]: I0224 05:53:52.619131 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21763a0-0808-4fe2-94bb-37aea78c00f0" containerName="mariadb-database-create"
Feb 24 05:53:52.619276 master-0 kubenswrapper[34361]: I0224 05:53:52.619161 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="24794552-5cfa-428e-ad46-ce7a1794c7ec" containerName="mariadb-account-create-update"
Feb 24 05:53:52.619276 master-0 kubenswrapper[34361]: I0224 05:53:52.619189 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="481f56ba-4864-42fb-b0f3-02a4e4311e7d" containerName="mariadb-database-create"
Feb 24 05:53:52.619276 master-0 kubenswrapper[34361]: I0224 05:53:52.619223 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf1db3e-e40c-41b6-ad8f-c1decbbfba24" containerName="mariadb-database-create"
Feb 24 05:53:52.619276 master-0 kubenswrapper[34361]: I0224 05:53:52.619247 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a1970d-fc4d-4522-a195-fa7fc9d5485d" containerName="mariadb-account-create-update"
Feb 24 05:53:52.619276 master-0 kubenswrapper[34361]: I0224 05:53:52.619264 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="398060c6-ec35-4659-89a2-550ad8c81453" containerName="mariadb-account-create-update"
Feb 24 05:53:52.620940 master-0 kubenswrapper[34361]: I0224 05:53:52.620282 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.623452 master-0 kubenswrapper[34361]: I0224 05:53:52.623349 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Feb 24 05:53:52.625142 master-0 kubenswrapper[34361]: I0224 05:53:52.625086 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 24 05:53:52.760998 master-0 kubenswrapper[34361]: I0224 05:53:52.760920 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.761328 master-0 kubenswrapper[34361]: I0224 05:53:52.761215 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-scripts\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.762554 master-0 kubenswrapper[34361]: I0224 05:53:52.762504 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5xz\" (UniqueName: \"kubernetes.io/projected/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-kube-api-access-bf5xz\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.762791 master-0 kubenswrapper[34361]: I0224 05:53:52.762764 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-config-data\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.899953 master-0 kubenswrapper[34361]: I0224 05:53:52.876710 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-config-data\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.899953 master-0 kubenswrapper[34361]: I0224 05:53:52.876989 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.899953 master-0 kubenswrapper[34361]: I0224 05:53:52.877060 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-scripts\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.899953 master-0 kubenswrapper[34361]: I0224 05:53:52.877119 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5xz\" (UniqueName: \"kubernetes.io/projected/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-kube-api-access-bf5xz\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.899953 master-0 kubenswrapper[34361]: I0224 05:53:52.883465 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.899953 master-0 kubenswrapper[34361]: I0224 05:53:52.884664 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ph4c9"]
Feb 24 05:53:52.900750 master-0 kubenswrapper[34361]: I0224 05:53:52.900703 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-scripts\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:52.900853 master-0 kubenswrapper[34361]: I0224 05:53:52.900804 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-config-data\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:53.022705 master-0 kubenswrapper[34361]: I0224 05:53:53.022624 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5xz\" (UniqueName: \"kubernetes.io/projected/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-kube-api-access-bf5xz\") pod \"nova-cell0-conductor-db-sync-ph4c9\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:53.127193 master-0 kubenswrapper[34361]: I0224 05:53:53.127135 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55b78786dc-sn557"
Feb 24 05:53:53.247589 master-0 kubenswrapper[34361]: I0224 05:53:53.245125 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d9d8bd467-64rvv"]
Feb 24 05:53:53.247589 master-0 kubenswrapper[34361]: I0224 05:53:53.245968 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" podUID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" containerName="dnsmasq-dns" containerID="cri-o://d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b" gracePeriod=10
Feb 24 05:53:53.266171 master-0 kubenswrapper[34361]: I0224 05:53:53.265575 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ph4c9"
Feb 24 05:53:53.842838 master-0 kubenswrapper[34361]: I0224 05:53:53.829150 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ph4c9"]
Feb 24 05:53:54.066918 master-0 kubenswrapper[34361]: I0224 05:53:54.066470 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv"
Feb 24 05:53:54.151340 master-0 kubenswrapper[34361]: I0224 05:53:54.150693 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-nb\") pod \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") "
Feb 24 05:53:54.151340 master-0 kubenswrapper[34361]: I0224 05:53:54.150797 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2hpp\" (UniqueName: \"kubernetes.io/projected/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-kube-api-access-n2hpp\") pod \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") "
Feb 24 05:53:54.151340 master-0 kubenswrapper[34361]: I0224 05:53:54.150947 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-svc\") pod \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") "
Feb 24 05:53:54.151340 master-0 kubenswrapper[34361]: I0224 05:53:54.151251 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-swift-storage-0\") pod \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") "
Feb 24 05:53:54.151756 master-0 kubenswrapper[34361]: I0224 05:53:54.151553 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-config\") pod \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") "
Feb 24 05:53:54.151756 master-0 kubenswrapper[34361]: I0224 05:53:54.151623 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-sb\") pod \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\" (UID: \"eb5e7cfa-75df-4db4-87aa-34e7c7acf852\") "
Feb 24 05:53:54.166463 master-0 kubenswrapper[34361]: I0224 05:53:54.155868 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-kube-api-access-n2hpp" (OuterVolumeSpecName: "kube-api-access-n2hpp") pod "eb5e7cfa-75df-4db4-87aa-34e7c7acf852" (UID: "eb5e7cfa-75df-4db4-87aa-34e7c7acf852"). InnerVolumeSpecName "kube-api-access-n2hpp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:53:54.233821 master-0 kubenswrapper[34361]: I0224 05:53:54.226535 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-config" (OuterVolumeSpecName: "config") pod "eb5e7cfa-75df-4db4-87aa-34e7c7acf852" (UID: "eb5e7cfa-75df-4db4-87aa-34e7c7acf852"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:53:54.256734 master-0 kubenswrapper[34361]: I0224 05:53:54.256634 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-config\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:54.256734 master-0 kubenswrapper[34361]: I0224 05:53:54.256702 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2hpp\" (UniqueName: \"kubernetes.io/projected/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-kube-api-access-n2hpp\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:54.271758 master-0 kubenswrapper[34361]: I0224 05:53:54.271709 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eb5e7cfa-75df-4db4-87aa-34e7c7acf852" (UID: "eb5e7cfa-75df-4db4-87aa-34e7c7acf852"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:53:54.282790 master-0 kubenswrapper[34361]: I0224 05:53:54.282228 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb5e7cfa-75df-4db4-87aa-34e7c7acf852" (UID: "eb5e7cfa-75df-4db4-87aa-34e7c7acf852"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:53:54.285206 master-0 kubenswrapper[34361]: I0224 05:53:54.285071 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb5e7cfa-75df-4db4-87aa-34e7c7acf852" (UID: "eb5e7cfa-75df-4db4-87aa-34e7c7acf852"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:53:54.285206 master-0 kubenswrapper[34361]: I0224 05:53:54.285079 34361 generic.go:334] "Generic (PLEG): container finished" podID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" containerID="d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b" exitCode=0
Feb 24 05:53:54.285206 master-0 kubenswrapper[34361]: I0224 05:53:54.285183 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv"
Feb 24 05:53:54.285449 master-0 kubenswrapper[34361]: I0224 05:53:54.285182 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" event={"ID":"eb5e7cfa-75df-4db4-87aa-34e7c7acf852","Type":"ContainerDied","Data":"d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b"}
Feb 24 05:53:54.285449 master-0 kubenswrapper[34361]: I0224 05:53:54.285265 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9d8bd467-64rvv" event={"ID":"eb5e7cfa-75df-4db4-87aa-34e7c7acf852","Type":"ContainerDied","Data":"6bb379766bb7417e819a428f2f7aae035911cdff8b6f55a0e3566352c7b03eb6"}
Feb 24 05:53:54.285449 master-0 kubenswrapper[34361]: I0224 05:53:54.285302 34361 scope.go:117] "RemoveContainer" containerID="d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b"
Feb 24 05:53:54.288180 master-0 kubenswrapper[34361]: I0224 05:53:54.287757 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb5e7cfa-75df-4db4-87aa-34e7c7acf852" (UID: "eb5e7cfa-75df-4db4-87aa-34e7c7acf852"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 05:53:54.288765 master-0 kubenswrapper[34361]: I0224 05:53:54.288715 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ph4c9" event={"ID":"d4656fa5-01da-43e7-8bc9-f2b67c89b70d","Type":"ContainerStarted","Data":"f716977c0c3045870b88a404f8cf3c315aa1b6597dfa49e7888ea3c8ffb4f44b"}
Feb 24 05:53:54.298183 master-0 kubenswrapper[34361]: I0224 05:53:54.295723 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerStarted","Data":"a46b2b88f29666f8e93397c596b1f2291619af1cb350863ee7a532e52ba78799"}
Feb 24 05:53:54.300216 master-0 kubenswrapper[34361]: I0224 05:53:54.300165 34361 generic.go:334] "Generic (PLEG): container finished" podID="2c572e96-fe42-4adb-83c9-3316ba4e374a" containerID="b01b744cadfef77bc931a8e024d9ba46a53bedac25d74fdabb1800a0d1759096" exitCode=0
Feb 24 05:53:54.300276 master-0 kubenswrapper[34361]: I0224 05:53:54.300226 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"2c572e96-fe42-4adb-83c9-3316ba4e374a","Type":"ContainerDied","Data":"b01b744cadfef77bc931a8e024d9ba46a53bedac25d74fdabb1800a0d1759096"}
Feb 24 05:53:54.364337 master-0 kubenswrapper[34361]: I0224 05:53:54.359876 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:54.364337 master-0 kubenswrapper[34361]: I0224 05:53:54.359940 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:54.364337 master-0 kubenswrapper[34361]: I0224 05:53:54.359954 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:54.364337 master-0 kubenswrapper[34361]: I0224 05:53:54.359964 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb5e7cfa-75df-4db4-87aa-34e7c7acf852-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 24 05:53:54.371667 master-0 kubenswrapper[34361]: I0224 05:53:54.367917 34361 scope.go:117] "RemoveContainer" containerID="71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c"
Feb 24 05:53:54.420532 master-0 kubenswrapper[34361]: I0224 05:53:54.420348 34361 scope.go:117] "RemoveContainer" containerID="d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b"
Feb 24 05:53:54.421174 master-0 kubenswrapper[34361]: E0224 05:53:54.421114 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b\": container with ID starting with d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b not found: ID does not exist" containerID="d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b"
Feb 24 05:53:54.421241 master-0 kubenswrapper[34361]: I0224 05:53:54.421200 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b"} err="failed to get container status \"d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b\": rpc error: code = NotFound desc = could not find container \"d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b\": container with ID starting with d96522ea33b3d8004f404168435c7d13a18d9f8612f5020d4ed449d6c052677b not found: ID does not exist"
Feb 24 05:53:54.421296 master-0 kubenswrapper[34361]: I0224 05:53:54.421247 34361 scope.go:117] "RemoveContainer" containerID="71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c"
Feb 24 05:53:54.422023 master-0 kubenswrapper[34361]: E0224 05:53:54.421985 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c\": container with ID starting with 71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c not found: ID does not exist" containerID="71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c"
Feb 24 05:53:54.422233 master-0 kubenswrapper[34361]: I0224 05:53:54.422170 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c"} err="failed to get container status \"71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c\": rpc error: code = NotFound desc = could not find container \"71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c\": container with ID starting with 71921d7aa4509fe7f718e89b8283c5fffe0419d27c0581ee838f12bde363b61c not found: ID does not exist"
Feb 24 05:53:54.764346 master-0 kubenswrapper[34361]: I0224 05:53:54.763560 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d9d8bd467-64rvv"]
Feb 24 05:53:54.845959 master-0 kubenswrapper[34361]: I0224 05:53:54.845846 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d9d8bd467-64rvv"]
Feb 24 05:53:55.327709 master-0 kubenswrapper[34361]: I0224 05:53:55.327467 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"2c572e96-fe42-4adb-83c9-3316ba4e374a","Type":"ContainerStarted","Data":"d0407accd39060b095e121f4ea4f22f5033862f584714eebb1d704dcef164207"}
Feb 24 05:53:55.878951 master-0 kubenswrapper[34361]: I0224 05:53:55.878866 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:53:55.878951 master-0 kubenswrapper[34361]: I0224 05:53:55.878953 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:53:55.942644 master-0 kubenswrapper[34361]: I0224 05:53:55.942363 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:53:55.949987 master-0 kubenswrapper[34361]: I0224 05:53:55.949915 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:53:56.641355 master-0 kubenswrapper[34361]: I0224 05:53:56.641254 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" path="/var/lib/kubelet/pods/eb5e7cfa-75df-4db4-87aa-34e7c7acf852/volumes"
Feb 24 05:53:56.645732 master-0 kubenswrapper[34361]: I0224 05:53:56.645638 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"2c572e96-fe42-4adb-83c9-3316ba4e374a","Type":"ContainerStarted","Data":"ecb084cd987f5e204061d7ebd0e9d68d172aa3c73629dee51d8d5ccd6a9b0ee7"}
Feb 24 05:53:56.646373 master-0 kubenswrapper[34361]: I0224 05:53:56.646339 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:53:56.646373 master-0 kubenswrapper[34361]: I0224 05:53:56.646369 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:53:56.879531 master-0 kubenswrapper[34361]: I0224 05:53:56.879397 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:53:56.879531 master-0 kubenswrapper[34361]: I0224 05:53:56.879530 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:53:56.934880 master-0 kubenswrapper[34361]: I0224 05:53:56.934737 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:53:56.941405 master-0 kubenswrapper[34361]: I0224 05:53:56.940823 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:53:57.675300 master-0 kubenswrapper[34361]: I0224 05:53:57.674741 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"2c572e96-fe42-4adb-83c9-3316ba4e374a","Type":"ContainerStarted","Data":"7bf82fc2b4f5f9e201e9d450d594a8889a805a915216fd2418c7b8df203b6e04"}
Feb 24 05:53:57.675300 master-0 kubenswrapper[34361]: I0224 05:53:57.674835 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"2c572e96-fe42-4adb-83c9-3316ba4e374a","Type":"ContainerStarted","Data":"511aeafbfc7255d37a6fdbce494e16a5af55d282747fcf11ceca7812640c46ba"}
Feb 24 05:53:57.675580 master-0 kubenswrapper[34361]: I0224 05:53:57.675439 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:53:57.675580 master-0 kubenswrapper[34361]: I0224 05:53:57.675507 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:53:58.703851 master-0 kubenswrapper[34361]: I0224 05:53:58.703747 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"2c572e96-fe42-4adb-83c9-3316ba4e374a","Type":"ContainerStarted","Data":"0c125e1db9d83f704bc7a3c6eee86506c9751f45ae11eb6c5771084b337c68a0"}
Feb 24 05:53:58.703851 master-0 kubenswrapper[34361]: I0224 05:53:58.703825 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 24 05:53:58.703851 master-0 kubenswrapper[34361]: I0224 05:53:58.703853 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 24 05:53:58.818936 master-0 kubenswrapper[34361]: I0224 05:53:58.818825 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=7.829224956 podStartE2EDuration="11.818799678s" podCreationTimestamp="2026-02-24 05:53:47 +0000 UTC" firstStartedPulling="2026-02-24 05:53:49.12615334 +0000 UTC m=+988.828770386" lastFinishedPulling="2026-02-24 05:53:53.115728062 +0000 UTC m=+992.818345108" observedRunningTime="2026-02-24 05:53:58.81181233 +0000 UTC m=+998.514429376" watchObservedRunningTime="2026-02-24 05:53:58.818799678 +0000 UTC m=+998.521416724"
Feb 24 05:53:58.936010 master-0 kubenswrapper[34361]: I0224 05:53:58.935326 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:53:58.936010 master-0 kubenswrapper[34361]: I0224 05:53:58.935467 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:53:58.936387 master-0 kubenswrapper[34361]: I0224 05:53:58.936215 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bdafd-default-internal-api-0"
Feb 24 05:54:00.135785 master-0 kubenswrapper[34361]: I0224 05:54:00.135708 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:54:00.136536 master-0 kubenswrapper[34361]: I0224 05:54:00.135879 34361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 05:54:00.184338 master-0 kubenswrapper[34361]: I0224 05:54:00.184118 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-bdafd-default-external-api-0"
Feb 24 05:54:00.789367 master-0 kubenswrapper[34361]: I0224 05:54:00.788097 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 24 05:54:01.772629 master-0 kubenswrapper[34361]: I0224 05:54:01.772531 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0"
Feb 24 05:54:02.645153 master-0 kubenswrapper[34361]: I0224 05:54:02.644951 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 24 05:54:02.645153 master-0 kubenswrapper[34361]: I0224 05:54:02.645057 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0"
Feb 24 05:54:04.830765 master-0 kubenswrapper[34361]: I0224 05:54:04.830666 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ph4c9" event={"ID":"d4656fa5-01da-43e7-8bc9-f2b67c89b70d","Type":"ContainerStarted","Data":"c16fcc32e461add8199d9cf339ba076ac5d2296d2d3b20d6e0fdc5e9ebe01b73"}
Feb 24 05:54:04.868490 master-0 kubenswrapper[34361]: I0224 05:54:04.868354 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ph4c9" podStartSLOduration=2.258414203 podStartE2EDuration="12.868304488s" podCreationTimestamp="2026-02-24 05:53:52 +0000 UTC" firstStartedPulling="2026-02-24 05:53:53.896044789 +0000 UTC m=+993.598661845" lastFinishedPulling="2026-02-24 05:54:04.505935084 +0000 UTC m=+1004.208552130" observedRunningTime="2026-02-24 05:54:04.853631391 +0000 UTC m=+1004.556248447" watchObservedRunningTime="2026-02-24 05:54:04.868304488 +0000 UTC m=+1004.570921554"
Feb 24 05:54:07.644094 master-0 kubenswrapper[34361]: I0224 05:54:07.644012 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0"
Feb 24 05:54:07.644094 master-0 kubenswrapper[34361]: I0224 05:54:07.644086 34361 kubelet.go:2542] "SyncLoop (probe)"
probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 24 05:54:07.691374 master-0 kubenswrapper[34361]: I0224 05:54:07.690521 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 24 05:54:07.697904 master-0 kubenswrapper[34361]: I0224 05:54:07.697822 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 24 05:54:07.892610 master-0 kubenswrapper[34361]: I0224 05:54:07.892543 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 24 05:54:07.897118 master-0 kubenswrapper[34361]: I0224 05:54:07.896910 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 24 05:54:23.148522 master-0 kubenswrapper[34361]: I0224 05:54:23.148402 34361 generic.go:334] "Generic (PLEG): container finished" podID="d4656fa5-01da-43e7-8bc9-f2b67c89b70d" containerID="c16fcc32e461add8199d9cf339ba076ac5d2296d2d3b20d6e0fdc5e9ebe01b73" exitCode=0 Feb 24 05:54:23.148522 master-0 kubenswrapper[34361]: I0224 05:54:23.148493 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ph4c9" event={"ID":"d4656fa5-01da-43e7-8bc9-f2b67c89b70d","Type":"ContainerDied","Data":"c16fcc32e461add8199d9cf339ba076ac5d2296d2d3b20d6e0fdc5e9ebe01b73"} Feb 24 05:54:24.702769 master-0 kubenswrapper[34361]: I0224 05:54:24.702710 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ph4c9" Feb 24 05:54:24.881462 master-0 kubenswrapper[34361]: I0224 05:54:24.881259 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf5xz\" (UniqueName: \"kubernetes.io/projected/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-kube-api-access-bf5xz\") pod \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " Feb 24 05:54:24.881462 master-0 kubenswrapper[34361]: I0224 05:54:24.881379 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-scripts\") pod \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " Feb 24 05:54:24.881462 master-0 kubenswrapper[34361]: I0224 05:54:24.881416 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-config-data\") pod \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " Feb 24 05:54:24.881462 master-0 kubenswrapper[34361]: I0224 05:54:24.881457 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-combined-ca-bundle\") pod \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\" (UID: \"d4656fa5-01da-43e7-8bc9-f2b67c89b70d\") " Feb 24 05:54:24.886404 master-0 kubenswrapper[34361]: I0224 05:54:24.886277 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-scripts" (OuterVolumeSpecName: "scripts") pod "d4656fa5-01da-43e7-8bc9-f2b67c89b70d" (UID: "d4656fa5-01da-43e7-8bc9-f2b67c89b70d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:24.887925 master-0 kubenswrapper[34361]: I0224 05:54:24.887855 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-kube-api-access-bf5xz" (OuterVolumeSpecName: "kube-api-access-bf5xz") pod "d4656fa5-01da-43e7-8bc9-f2b67c89b70d" (UID: "d4656fa5-01da-43e7-8bc9-f2b67c89b70d"). InnerVolumeSpecName "kube-api-access-bf5xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:54:24.912990 master-0 kubenswrapper[34361]: I0224 05:54:24.912904 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-config-data" (OuterVolumeSpecName: "config-data") pod "d4656fa5-01da-43e7-8bc9-f2b67c89b70d" (UID: "d4656fa5-01da-43e7-8bc9-f2b67c89b70d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:24.932563 master-0 kubenswrapper[34361]: I0224 05:54:24.932494 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4656fa5-01da-43e7-8bc9-f2b67c89b70d" (UID: "d4656fa5-01da-43e7-8bc9-f2b67c89b70d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:24.984687 master-0 kubenswrapper[34361]: I0224 05:54:24.984617 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf5xz\" (UniqueName: \"kubernetes.io/projected/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-kube-api-access-bf5xz\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:24.984687 master-0 kubenswrapper[34361]: I0224 05:54:24.984663 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:24.984687 master-0 kubenswrapper[34361]: I0224 05:54:24.984677 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:24.984687 master-0 kubenswrapper[34361]: I0224 05:54:24.984687 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4656fa5-01da-43e7-8bc9-f2b67c89b70d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:25.187878 master-0 kubenswrapper[34361]: I0224 05:54:25.187641 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ph4c9" event={"ID":"d4656fa5-01da-43e7-8bc9-f2b67c89b70d","Type":"ContainerDied","Data":"f716977c0c3045870b88a404f8cf3c315aa1b6597dfa49e7888ea3c8ffb4f44b"} Feb 24 05:54:25.187878 master-0 kubenswrapper[34361]: I0224 05:54:25.187800 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f716977c0c3045870b88a404f8cf3c315aa1b6597dfa49e7888ea3c8ffb4f44b" Feb 24 05:54:25.187878 master-0 kubenswrapper[34361]: I0224 05:54:25.187749 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ph4c9" Feb 24 05:54:25.391078 master-0 kubenswrapper[34361]: I0224 05:54:25.390979 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 24 05:54:25.391686 master-0 kubenswrapper[34361]: E0224 05:54:25.391645 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4656fa5-01da-43e7-8bc9-f2b67c89b70d" containerName="nova-cell0-conductor-db-sync" Feb 24 05:54:25.391686 master-0 kubenswrapper[34361]: I0224 05:54:25.391676 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4656fa5-01da-43e7-8bc9-f2b67c89b70d" containerName="nova-cell0-conductor-db-sync" Feb 24 05:54:25.391810 master-0 kubenswrapper[34361]: E0224 05:54:25.391715 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" containerName="init" Feb 24 05:54:25.391810 master-0 kubenswrapper[34361]: I0224 05:54:25.391724 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" containerName="init" Feb 24 05:54:25.391810 master-0 kubenswrapper[34361]: E0224 05:54:25.391795 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" containerName="dnsmasq-dns" Feb 24 05:54:25.391810 master-0 kubenswrapper[34361]: I0224 05:54:25.391802 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" containerName="dnsmasq-dns" Feb 24 05:54:25.392135 master-0 kubenswrapper[34361]: I0224 05:54:25.392102 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5e7cfa-75df-4db4-87aa-34e7c7acf852" containerName="dnsmasq-dns" Feb 24 05:54:25.392193 master-0 kubenswrapper[34361]: I0224 05:54:25.392158 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4656fa5-01da-43e7-8bc9-f2b67c89b70d" containerName="nova-cell0-conductor-db-sync" Feb 24 05:54:25.393170 master-0 
kubenswrapper[34361]: I0224 05:54:25.393133 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.396541 master-0 kubenswrapper[34361]: I0224 05:54:25.396496 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 24 05:54:25.407602 master-0 kubenswrapper[34361]: I0224 05:54:25.407536 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 24 05:54:25.498917 master-0 kubenswrapper[34361]: I0224 05:54:25.498841 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.499451 master-0 kubenswrapper[34361]: I0224 05:54:25.499422 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwzk6\" (UniqueName: \"kubernetes.io/projected/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-kube-api-access-kwzk6\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.499688 master-0 kubenswrapper[34361]: I0224 05:54:25.499666 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.602572 master-0 kubenswrapper[34361]: I0224 05:54:25.602498 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.602979 master-0 kubenswrapper[34361]: I0224 05:54:25.602949 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwzk6\" (UniqueName: \"kubernetes.io/projected/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-kube-api-access-kwzk6\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.603211 master-0 kubenswrapper[34361]: I0224 05:54:25.603187 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.609615 master-0 kubenswrapper[34361]: I0224 05:54:25.609542 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.616978 master-0 kubenswrapper[34361]: I0224 05:54:25.616893 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.623001 master-0 kubenswrapper[34361]: I0224 05:54:25.622948 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwzk6\" (UniqueName: 
\"kubernetes.io/projected/38ec5730-a1d1-4df2-945d-05f93fcf8a4d-kube-api-access-kwzk6\") pod \"nova-cell0-conductor-0\" (UID: \"38ec5730-a1d1-4df2-945d-05f93fcf8a4d\") " pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:25.758057 master-0 kubenswrapper[34361]: I0224 05:54:25.757974 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:26.353684 master-0 kubenswrapper[34361]: I0224 05:54:26.353602 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 24 05:54:27.234170 master-0 kubenswrapper[34361]: I0224 05:54:27.234077 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38ec5730-a1d1-4df2-945d-05f93fcf8a4d","Type":"ContainerStarted","Data":"fcbec4ae94cca31c2d5857ae326816e800301593001e7877844edfe7cd2b2d37"} Feb 24 05:54:27.234170 master-0 kubenswrapper[34361]: I0224 05:54:27.234155 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38ec5730-a1d1-4df2-945d-05f93fcf8a4d","Type":"ContainerStarted","Data":"0d856a442283e4d24a27c919c3c700237bda68e7871e58c4c27c859a285c8218"} Feb 24 05:54:27.279439 master-0 kubenswrapper[34361]: I0224 05:54:27.279297 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.279260597 podStartE2EDuration="2.279260597s" podCreationTimestamp="2026-02-24 05:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:54:27.261554739 +0000 UTC m=+1026.964171795" watchObservedRunningTime="2026-02-24 05:54:27.279260597 +0000 UTC m=+1026.981877673" Feb 24 05:54:28.246881 master-0 kubenswrapper[34361]: I0224 05:54:28.246798 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 24 
05:54:35.796022 master-0 kubenswrapper[34361]: I0224 05:54:35.795942 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 24 05:54:36.462590 master-0 kubenswrapper[34361]: I0224 05:54:36.462518 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-fck78"] Feb 24 05:54:36.464675 master-0 kubenswrapper[34361]: I0224 05:54:36.464642 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.468408 master-0 kubenswrapper[34361]: I0224 05:54:36.468333 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 24 05:54:36.480519 master-0 kubenswrapper[34361]: I0224 05:54:36.480462 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 24 05:54:36.481593 master-0 kubenswrapper[34361]: I0224 05:54:36.481539 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fck78"] Feb 24 05:54:36.566452 master-0 kubenswrapper[34361]: I0224 05:54:36.566348 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 24 05:54:36.569558 master-0 kubenswrapper[34361]: I0224 05:54:36.569516 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.576467 master-0 kubenswrapper[34361]: I0224 05:54:36.576417 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2sgv\" (UniqueName: \"kubernetes.io/projected/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-kube-api-access-l2sgv\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.576587 master-0 kubenswrapper[34361]: I0224 05:54:36.576503 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-config-data\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.576639 master-0 kubenswrapper[34361]: I0224 05:54:36.576596 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-scripts\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.576681 master-0 kubenswrapper[34361]: I0224 05:54:36.576655 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.579713 master-0 kubenswrapper[34361]: I0224 05:54:36.577044 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Feb 24 05:54:36.593890 
master-0 kubenswrapper[34361]: I0224 05:54:36.593155 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 24 05:54:36.711078 master-0 kubenswrapper[34361]: I0224 05:54:36.704389 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2sgv\" (UniqueName: \"kubernetes.io/projected/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-kube-api-access-l2sgv\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.711078 master-0 kubenswrapper[34361]: I0224 05:54:36.705119 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8gfb\" (UniqueName: \"kubernetes.io/projected/455d7bcc-647b-4c91-b293-aaa0cd448723-kube-api-access-f8gfb\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.711078 master-0 kubenswrapper[34361]: I0224 05:54:36.705385 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-config-data\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.711078 master-0 kubenswrapper[34361]: I0224 05:54:36.705554 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455d7bcc-647b-4c91-b293-aaa0cd448723-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.711078 master-0 kubenswrapper[34361]: I0224 05:54:36.705578 34361 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-scripts\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.711078 master-0 kubenswrapper[34361]: I0224 05:54:36.705688 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.711078 master-0 kubenswrapper[34361]: I0224 05:54:36.706078 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455d7bcc-647b-4c91-b293-aaa0cd448723-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.713496 master-0 kubenswrapper[34361]: I0224 05:54:36.712480 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-scripts\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.713496 master-0 kubenswrapper[34361]: I0224 05:54:36.713300 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.715207 master-0 kubenswrapper[34361]: I0224 05:54:36.715166 34361 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-config-data\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.754685 master-0 kubenswrapper[34361]: I0224 05:54:36.754289 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2sgv\" (UniqueName: \"kubernetes.io/projected/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-kube-api-access-l2sgv\") pod \"nova-cell0-cell-mapping-fck78\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.802772 master-0 kubenswrapper[34361]: I0224 05:54:36.802688 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:36.828285 master-0 kubenswrapper[34361]: I0224 05:54:36.828221 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455d7bcc-647b-4c91-b293-aaa0cd448723-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.828813 master-0 kubenswrapper[34361]: I0224 05:54:36.828793 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8gfb\" (UniqueName: \"kubernetes.io/projected/455d7bcc-647b-4c91-b293-aaa0cd448723-kube-api-access-f8gfb\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.828988 master-0 kubenswrapper[34361]: I0224 05:54:36.828974 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/455d7bcc-647b-4c91-b293-aaa0cd448723-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.833205 master-0 kubenswrapper[34361]: I0224 05:54:36.833177 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/455d7bcc-647b-4c91-b293-aaa0cd448723-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.837091 master-0 kubenswrapper[34361]: I0224 05:54:36.836061 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/455d7bcc-647b-4c91-b293-aaa0cd448723-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.861694 master-0 kubenswrapper[34361]: I0224 05:54:36.861633 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 24 05:54:36.867959 master-0 kubenswrapper[34361]: I0224 05:54:36.867893 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:54:36.874899 master-0 kubenswrapper[34361]: I0224 05:54:36.872742 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 24 05:54:36.878327 master-0 kubenswrapper[34361]: I0224 05:54:36.878241 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:54:36.905583 master-0 kubenswrapper[34361]: I0224 05:54:36.905124 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8gfb\" (UniqueName: \"kubernetes.io/projected/455d7bcc-647b-4c91-b293-aaa0cd448723-kube-api-access-f8gfb\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"455d7bcc-647b-4c91-b293-aaa0cd448723\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.926345 master-0 kubenswrapper[34361]: I0224 05:54:36.912468 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:36.944074 master-0 kubenswrapper[34361]: I0224 05:54:36.943926 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-logs\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:36.944074 master-0 kubenswrapper[34361]: I0224 05:54:36.944064 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:36.944427 master-0 kubenswrapper[34361]: I0224 05:54:36.944394 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-config-data\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:36.959676 master-0 kubenswrapper[34361]: I0224 05:54:36.944481 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltcms\" (UniqueName: \"kubernetes.io/projected/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-kube-api-access-ltcms\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.055546 master-0 kubenswrapper[34361]: I0224 05:54:37.055361 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:54:37.057303 master-0 kubenswrapper[34361]: I0224 05:54:37.057229 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-config-data\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.057476 master-0 kubenswrapper[34361]: I0224 05:54:37.057447 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltcms\" (UniqueName: \"kubernetes.io/projected/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-kube-api-access-ltcms\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.057753 master-0 kubenswrapper[34361]: I0224 05:54:37.057732 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-logs\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.057916 master-0 kubenswrapper[34361]: I0224 05:54:37.057887 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.062454 master-0 kubenswrapper[34361]: I0224 05:54:37.060648 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-logs\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.065864 master-0 kubenswrapper[34361]: I0224 05:54:37.065064 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:54:37.071349 master-0 kubenswrapper[34361]: I0224 05:54:37.068596 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-config-data\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.071349 master-0 kubenswrapper[34361]: I0224 05:54:37.069166 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.071349 master-0 kubenswrapper[34361]: I0224 05:54:37.069565 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 05:54:37.095533 master-0 kubenswrapper[34361]: I0224 05:54:37.086410 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:54:37.104555 master-0 kubenswrapper[34361]: I0224 05:54:37.104331 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltcms\" (UniqueName: 
\"kubernetes.io/projected/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-kube-api-access-ltcms\") pod \"nova-api-0\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " pod="openstack/nova-api-0" Feb 24 05:54:37.129369 master-0 kubenswrapper[34361]: I0224 05:54:37.129285 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:54:37.132151 master-0 kubenswrapper[34361]: I0224 05:54:37.132100 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:54:37.140964 master-0 kubenswrapper[34361]: I0224 05:54:37.134988 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 24 05:54:37.155910 master-0 kubenswrapper[34361]: I0224 05:54:37.155599 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:54:37.166450 master-0 kubenswrapper[34361]: I0224 05:54:37.166394 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j974\" (UniqueName: \"kubernetes.io/projected/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-kube-api-access-2j974\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.166571 master-0 kubenswrapper[34361]: I0224 05:54:37.166531 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-logs\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.166630 master-0 kubenswrapper[34361]: I0224 05:54:37.166609 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-config-data\") pod \"nova-metadata-0\" (UID: 
\"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.166667 master-0 kubenswrapper[34361]: I0224 05:54:37.166649 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-config-data\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.166705 master-0 kubenswrapper[34361]: I0224 05:54:37.166669 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g4l5\" (UniqueName: \"kubernetes.io/projected/f527b398-2fba-4b52-bdf5-0bab54c9394b-kube-api-access-6g4l5\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.166946 master-0 kubenswrapper[34361]: I0224 05:54:37.166913 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.166994 master-0 kubenswrapper[34361]: I0224 05:54:37.166949 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.255906 master-0 kubenswrapper[34361]: I0224 05:54:37.255820 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fcf8f9d6f-578q8"] Feb 24 05:54:37.265342 master-0 kubenswrapper[34361]: I0224 05:54:37.261071 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.276482 master-0 kubenswrapper[34361]: I0224 05:54:37.276235 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.276482 master-0 kubenswrapper[34361]: I0224 05:54:37.276293 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.276482 master-0 kubenswrapper[34361]: I0224 05:54:37.276375 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j974\" (UniqueName: \"kubernetes.io/projected/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-kube-api-access-2j974\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.276482 master-0 kubenswrapper[34361]: I0224 05:54:37.276421 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-logs\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.276482 master-0 kubenswrapper[34361]: I0224 05:54:37.276462 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-config-data\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.276482 master-0 
kubenswrapper[34361]: I0224 05:54:37.276491 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-config-data\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.278611 master-0 kubenswrapper[34361]: I0224 05:54:37.276512 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g4l5\" (UniqueName: \"kubernetes.io/projected/f527b398-2fba-4b52-bdf5-0bab54c9394b-kube-api-access-6g4l5\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.285227 master-0 kubenswrapper[34361]: I0224 05:54:37.284016 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf8f9d6f-578q8"] Feb 24 05:54:37.320121 master-0 kubenswrapper[34361]: I0224 05:54:37.314448 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.320121 master-0 kubenswrapper[34361]: I0224 05:54:37.314924 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-config-data\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.320121 master-0 kubenswrapper[34361]: I0224 05:54:37.318697 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-logs\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 
24 05:54:37.323118 master-0 kubenswrapper[34361]: I0224 05:54:37.321188 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-config-data\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.323118 master-0 kubenswrapper[34361]: I0224 05:54:37.321298 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g4l5\" (UniqueName: \"kubernetes.io/projected/f527b398-2fba-4b52-bdf5-0bab54c9394b-kube-api-access-6g4l5\") pod \"nova-scheduler-0\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " pod="openstack/nova-scheduler-0" Feb 24 05:54:37.330426 master-0 kubenswrapper[34361]: I0224 05:54:37.330380 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.353198 master-0 kubenswrapper[34361]: I0224 05:54:37.353134 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j974\" (UniqueName: \"kubernetes.io/projected/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-kube-api-access-2j974\") pod \"nova-metadata-0\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") " pod="openstack/nova-metadata-0" Feb 24 05:54:37.368013 master-0 kubenswrapper[34361]: I0224 05:54:37.367690 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 05:54:37.370295 master-0 kubenswrapper[34361]: I0224 05:54:37.370263 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.377876 master-0 kubenswrapper[34361]: I0224 05:54:37.377833 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 24 05:54:37.383237 master-0 kubenswrapper[34361]: I0224 05:54:37.383172 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.383331 master-0 kubenswrapper[34361]: I0224 05:54:37.383269 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7cq4\" (UniqueName: \"kubernetes.io/projected/2d229fa5-0153-43d2-92d6-e548ed604b0b-kube-api-access-g7cq4\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.383331 master-0 kubenswrapper[34361]: I0224 05:54:37.383327 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-svc\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.383412 master-0 kubenswrapper[34361]: I0224 05:54:37.383363 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.383412 master-0 
kubenswrapper[34361]: I0224 05:54:37.383395 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.383508 master-0 kubenswrapper[34361]: I0224 05:54:37.383484 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-config\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.396908 master-0 kubenswrapper[34361]: I0224 05:54:37.396849 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:54:37.398036 master-0 kubenswrapper[34361]: I0224 05:54:37.397951 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 05:54:37.487466 master-0 kubenswrapper[34361]: I0224 05:54:37.487129 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:54:37.500074 master-0 kubenswrapper[34361]: I0224 05:54:37.499992 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7cq4\" (UniqueName: \"kubernetes.io/projected/2d229fa5-0153-43d2-92d6-e548ed604b0b-kube-api-access-g7cq4\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.500232 master-0 kubenswrapper[34361]: I0224 05:54:37.500141 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-svc\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.500326 master-0 kubenswrapper[34361]: I0224 05:54:37.500243 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.500407 master-0 kubenswrapper[34361]: I0224 05:54:37.500303 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.500803 master-0 kubenswrapper[34361]: I0224 05:54:37.500746 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.500991 master-0 kubenswrapper[34361]: I0224 05:54:37.500964 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-config\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.501060 master-0 kubenswrapper[34361]: I0224 05:54:37.501020 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.501434 master-0 kubenswrapper[34361]: I0224 05:54:37.501410 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.501498 master-0 kubenswrapper[34361]: I0224 05:54:37.501482 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9jd\" (UniqueName: \"kubernetes.io/projected/25249dd5-54b4-44dc-ab35-e8532b1d0875-kube-api-access-xg9jd\") pod \"nova-cell1-novncproxy-0\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.503338 master-0 kubenswrapper[34361]: I0224 05:54:37.503295 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-svc\") pod 
\"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.503930 master-0 kubenswrapper[34361]: I0224 05:54:37.503902 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.504668 master-0 kubenswrapper[34361]: I0224 05:54:37.504641 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.505479 master-0 kubenswrapper[34361]: I0224 05:54:37.505277 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.509618 master-0 kubenswrapper[34361]: I0224 05:54:37.509589 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-config\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.549112 master-0 kubenswrapper[34361]: I0224 05:54:37.546748 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:54:37.566857 master-0 kubenswrapper[34361]: I0224 05:54:37.566808 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7cq4\" (UniqueName: \"kubernetes.io/projected/2d229fa5-0153-43d2-92d6-e548ed604b0b-kube-api-access-g7cq4\") pod \"dnsmasq-dns-6fcf8f9d6f-578q8\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.604334 master-0 kubenswrapper[34361]: I0224 05:54:37.603947 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.604334 master-0 kubenswrapper[34361]: I0224 05:54:37.604032 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.605373 master-0 kubenswrapper[34361]: I0224 05:54:37.604868 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9jd\" (UniqueName: \"kubernetes.io/projected/25249dd5-54b4-44dc-ab35-e8532b1d0875-kube-api-access-xg9jd\") pod \"nova-cell1-novncproxy-0\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.609392 master-0 kubenswrapper[34361]: I0224 05:54:37.609137 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.609392 master-0 kubenswrapper[34361]: I0224 05:54:37.609242 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fck78"] Feb 24 05:54:37.611266 master-0 kubenswrapper[34361]: I0224 05:54:37.611032 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.624197 master-0 kubenswrapper[34361]: I0224 05:54:37.624138 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:37.647216 master-0 kubenswrapper[34361]: I0224 05:54:37.647143 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9jd\" (UniqueName: \"kubernetes.io/projected/25249dd5-54b4-44dc-ab35-e8532b1d0875-kube-api-access-xg9jd\") pod \"nova-cell1-novncproxy-0\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.705145 master-0 kubenswrapper[34361]: I0224 05:54:37.705057 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:37.889219 master-0 kubenswrapper[34361]: I0224 05:54:37.889143 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lc7xf"] Feb 24 05:54:37.894516 master-0 kubenswrapper[34361]: I0224 05:54:37.894464 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:37.902233 master-0 kubenswrapper[34361]: I0224 05:54:37.899871 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 24 05:54:37.902233 master-0 kubenswrapper[34361]: I0224 05:54:37.900014 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 24 05:54:37.909454 master-0 kubenswrapper[34361]: I0224 05:54:37.906972 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lc7xf"] Feb 24 05:54:37.952916 master-0 kubenswrapper[34361]: I0224 05:54:37.951707 34361 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 05:54:37.965111 master-0 kubenswrapper[34361]: I0224 05:54:37.964447 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 24 05:54:38.027333 master-0 kubenswrapper[34361]: I0224 05:54:38.022075 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fs5m\" (UniqueName: \"kubernetes.io/projected/7d0a5bab-2c7d-4526-8505-873c732edcf1-kube-api-access-8fs5m\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:38.027333 master-0 kubenswrapper[34361]: I0224 05:54:38.022301 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:38.027333 master-0 kubenswrapper[34361]: I0224 05:54:38.022412 34361 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-scripts\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:38.027333 master-0 kubenswrapper[34361]: I0224 05:54:38.022461 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-config-data\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:38.143158 master-0 kubenswrapper[34361]: I0224 05:54:38.142301 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-scripts\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:38.143158 master-0 kubenswrapper[34361]: I0224 05:54:38.142411 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-config-data\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:38.143158 master-0 kubenswrapper[34361]: I0224 05:54:38.142556 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fs5m\" (UniqueName: \"kubernetes.io/projected/7d0a5bab-2c7d-4526-8505-873c732edcf1-kube-api-access-8fs5m\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:38.143158 master-0 
kubenswrapper[34361]: I0224 05:54:38.142662 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf"
Feb 24 05:54:38.149246 master-0 kubenswrapper[34361]: I0224 05:54:38.147754 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf"
Feb 24 05:54:38.149246 master-0 kubenswrapper[34361]: I0224 05:54:38.148846 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-scripts\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf"
Feb 24 05:54:38.155499 master-0 kubenswrapper[34361]: I0224 05:54:38.154935 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-config-data\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf"
Feb 24 05:54:38.175739 master-0 kubenswrapper[34361]: I0224 05:54:38.174984 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fs5m\" (UniqueName: \"kubernetes.io/projected/7d0a5bab-2c7d-4526-8505-873c732edcf1-kube-api-access-8fs5m\") pod \"nova-cell1-conductor-db-sync-lc7xf\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " pod="openstack/nova-cell1-conductor-db-sync-lc7xf"
Feb 24 05:54:38.199797 master-0 kubenswrapper[34361]: I0224 05:54:38.198717 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 24 05:54:38.210647 master-0 kubenswrapper[34361]: W0224 05:54:38.209933 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01fe89a6_ca30_4209_b7e2_a95d94ed57ac.slice/crio-d690055e2baf2f7400223cde46bc419abce14cd30a198ae7d6510596a2951437 WatchSource:0}: Error finding container d690055e2baf2f7400223cde46bc419abce14cd30a198ae7d6510596a2951437: Status 404 returned error can't find the container with id d690055e2baf2f7400223cde46bc419abce14cd30a198ae7d6510596a2951437
Feb 24 05:54:38.381257 master-0 kubenswrapper[34361]: I0224 05:54:38.380767 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lc7xf"
Feb 24 05:54:38.404840 master-0 kubenswrapper[34361]: I0224 05:54:38.403919 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 24 05:54:38.431068 master-0 kubenswrapper[34361]: I0224 05:54:38.430052 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01fe89a6-ca30-4209-b7e2-a95d94ed57ac","Type":"ContainerStarted","Data":"d690055e2baf2f7400223cde46bc419abce14cd30a198ae7d6510596a2951437"}
Feb 24 05:54:38.435163 master-0 kubenswrapper[34361]: I0224 05:54:38.434409 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fck78" event={"ID":"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d","Type":"ContainerStarted","Data":"e8550d306f17ed524e9dca5e627f8e0163ebe91bf66c9fb133708d5539e8e635"}
Feb 24 05:54:38.435163 master-0 kubenswrapper[34361]: I0224 05:54:38.434489 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fck78" event={"ID":"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d","Type":"ContainerStarted","Data":"16db2a6f473a7d0f762f0a9daf7dca7456d5d10d6d3a078ce7639ea46abd9ce4"}
Feb 24 05:54:38.436335 master-0 kubenswrapper[34361]: I0224 05:54:38.435513 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 05:54:38.436335 master-0 kubenswrapper[34361]: I0224 05:54:38.436120 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"455d7bcc-647b-4c91-b293-aaa0cd448723","Type":"ContainerStarted","Data":"73edfed8937e84b0b3cd0c8a2dde5b025db11c841b381374c0896a763794bf9c"}
Feb 24 05:54:38.463246 master-0 kubenswrapper[34361]: W0224 05:54:38.462821 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ac6ecc5_c6bf_4048_9ea8_941669b1ecd1.slice/crio-b5c09a2e4b2dab303c75fec965c1220a7ff6df827fd3d2ab0559e924fd39672d WatchSource:0}: Error finding container b5c09a2e4b2dab303c75fec965c1220a7ff6df827fd3d2ab0559e924fd39672d: Status 404 returned error can't find the container with id b5c09a2e4b2dab303c75fec965c1220a7ff6df827fd3d2ab0559e924fd39672d
Feb 24 05:54:38.468829 master-0 kubenswrapper[34361]: I0224 05:54:38.467871 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-fck78" podStartSLOduration=2.46784008 podStartE2EDuration="2.46784008s" podCreationTimestamp="2026-02-24 05:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:54:38.457780659 +0000 UTC m=+1038.160397695" watchObservedRunningTime="2026-02-24 05:54:38.46784008 +0000 UTC m=+1038.170457126"
Feb 24 05:54:38.638273 master-0 kubenswrapper[34361]: W0224 05:54:38.638162 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25249dd5_54b4_44dc_ab35_e8532b1d0875.slice/crio-183ab0821ade2735e9e3e697e6ae92172e0515adcacc52fdfd0a552690c0ef6f WatchSource:0}: Error finding container 183ab0821ade2735e9e3e697e6ae92172e0515adcacc52fdfd0a552690c0ef6f: Status 404 returned error can't find the container with id 183ab0821ade2735e9e3e697e6ae92172e0515adcacc52fdfd0a552690c0ef6f
Feb 24 05:54:38.652393 master-0 kubenswrapper[34361]: I0224 05:54:38.652169 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fcf8f9d6f-578q8"]
Feb 24 05:54:38.677912 master-0 kubenswrapper[34361]: I0224 05:54:38.676799 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 24 05:54:39.108180 master-0 kubenswrapper[34361]: I0224 05:54:39.108071 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lc7xf"]
Feb 24 05:54:39.465074 master-0 kubenswrapper[34361]: I0224 05:54:39.464946 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1","Type":"ContainerStarted","Data":"b5c09a2e4b2dab303c75fec965c1220a7ff6df827fd3d2ab0559e924fd39672d"}
Feb 24 05:54:39.472105 master-0 kubenswrapper[34361]: I0224 05:54:39.472030 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f527b398-2fba-4b52-bdf5-0bab54c9394b","Type":"ContainerStarted","Data":"be0ff1f910ca308d084f34ab8fed9c51995a0f0f24bac74a2f09f119f0beb28c"}
Feb 24 05:54:39.475815 master-0 kubenswrapper[34361]: I0224 05:54:39.475780 34361 generic.go:334] "Generic (PLEG): container finished" podID="2d229fa5-0153-43d2-92d6-e548ed604b0b" containerID="215edc49d60e772323fd3bbc4c69723c8dadafacdce47e7c7984dd2521caa018" exitCode=0
Feb 24 05:54:39.475894 master-0 kubenswrapper[34361]: I0224 05:54:39.475875 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" event={"ID":"2d229fa5-0153-43d2-92d6-e548ed604b0b","Type":"ContainerDied","Data":"215edc49d60e772323fd3bbc4c69723c8dadafacdce47e7c7984dd2521caa018"}
Feb 24 05:54:39.475939 master-0 kubenswrapper[34361]: I0224 05:54:39.475900 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" event={"ID":"2d229fa5-0153-43d2-92d6-e548ed604b0b","Type":"ContainerStarted","Data":"96293aa1aeb850959c682803c0bbd53c471e71c8830a61709e8562e15eb31920"}
Feb 24 05:54:39.482651 master-0 kubenswrapper[34361]: I0224 05:54:39.482541 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"25249dd5-54b4-44dc-ab35-e8532b1d0875","Type":"ContainerStarted","Data":"183ab0821ade2735e9e3e697e6ae92172e0515adcacc52fdfd0a552690c0ef6f"}
Feb 24 05:54:39.523336 master-0 kubenswrapper[34361]: I0224 05:54:39.514935 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lc7xf" event={"ID":"7d0a5bab-2c7d-4526-8505-873c732edcf1","Type":"ContainerStarted","Data":"f2fd1d7ef0bf47ac43e0ab7bb68e36eb517781f84ab1c43b2e45aa3ca590517a"}
Feb 24 05:54:40.546338 master-0 kubenswrapper[34361]: I0224 05:54:40.545364 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lc7xf" event={"ID":"7d0a5bab-2c7d-4526-8505-873c732edcf1","Type":"ContainerStarted","Data":"515d55b62b2b88bfd6765de031608c20d95a75d635ec2d3ad786f86826787472"}
Feb 24 05:54:40.550717 master-0 kubenswrapper[34361]: I0224 05:54:40.549070 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" event={"ID":"2d229fa5-0153-43d2-92d6-e548ed604b0b","Type":"ContainerStarted","Data":"65d2b8dd751716ea675453d0f0ff5d427a09809a3ad40f1add62946e5d0a5571"}
Feb 24 05:54:40.550717 master-0 kubenswrapper[34361]: I0224 05:54:40.549377 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8"
Feb 24 05:54:40.608340 master-0 kubenswrapper[34361]: I0224 05:54:40.607408 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" podStartSLOduration=3.607381265 podStartE2EDuration="3.607381265s" podCreationTimestamp="2026-02-24 05:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:54:40.607142548 +0000 UTC m=+1040.309759594" watchObservedRunningTime="2026-02-24 05:54:40.607381265 +0000 UTC m=+1040.309998311"
Feb 24 05:54:40.618380 master-0 kubenswrapper[34361]: I0224 05:54:40.617246 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lc7xf" podStartSLOduration=3.6172216 podStartE2EDuration="3.6172216s" podCreationTimestamp="2026-02-24 05:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:54:40.572920295 +0000 UTC m=+1040.275537351" watchObservedRunningTime="2026-02-24 05:54:40.6172216 +0000 UTC m=+1040.319838646"
Feb 24 05:54:41.116886 master-0 kubenswrapper[34361]: I0224 05:54:41.116810 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 24 05:54:41.173579 master-0 kubenswrapper[34361]: I0224 05:54:41.173493 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 05:54:43.683343 master-0 kubenswrapper[34361]: I0224 05:54:43.679145 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"25249dd5-54b4-44dc-ab35-e8532b1d0875","Type":"ContainerStarted","Data":"ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5"}
Feb 24 05:54:43.683343 master-0 kubenswrapper[34361]: I0224 05:54:43.679298 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="25249dd5-54b4-44dc-ab35-e8532b1d0875" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5" gracePeriod=30
Feb 24 05:54:43.685615 master-0 kubenswrapper[34361]: I0224 05:54:43.684956 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01fe89a6-ca30-4209-b7e2-a95d94ed57ac","Type":"ContainerStarted","Data":"783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf"}
Feb 24 05:54:43.685615 master-0 kubenswrapper[34361]: I0224 05:54:43.685016 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01fe89a6-ca30-4209-b7e2-a95d94ed57ac","Type":"ContainerStarted","Data":"743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b"}
Feb 24 05:54:43.689863 master-0 kubenswrapper[34361]: I0224 05:54:43.689716 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1","Type":"ContainerStarted","Data":"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2"}
Feb 24 05:54:43.689863 master-0 kubenswrapper[34361]: I0224 05:54:43.689756 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1","Type":"ContainerStarted","Data":"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2"}
Feb 24 05:54:43.690006 master-0 kubenswrapper[34361]: I0224 05:54:43.689876 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerName="nova-metadata-log" containerID="cri-o://db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2" gracePeriod=30
Feb 24 05:54:43.690045 master-0 kubenswrapper[34361]: I0224 05:54:43.689992 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerName="nova-metadata-metadata" containerID="cri-o://b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2" gracePeriod=30
Feb 24 05:54:43.692931 master-0 kubenswrapper[34361]: I0224 05:54:43.692586 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f527b398-2fba-4b52-bdf5-0bab54c9394b","Type":"ContainerStarted","Data":"ab1d2a18d057a7e514b3ad03f1314ee6704a40bd22b1cdd2bbeebf6481205336"}
Feb 24 05:54:43.697025 master-0 kubenswrapper[34361]: I0224 05:54:43.696987 34361 generic.go:334] "Generic (PLEG): container finished" podID="74198545-a0ee-4142-93a6-86175a1d3c02" containerID="a46b2b88f29666f8e93397c596b1f2291619af1cb350863ee7a532e52ba78799" exitCode=0
Feb 24 05:54:43.697099 master-0 kubenswrapper[34361]: I0224 05:54:43.697040 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerDied","Data":"a46b2b88f29666f8e93397c596b1f2291619af1cb350863ee7a532e52ba78799"}
Feb 24 05:54:43.715985 master-0 kubenswrapper[34361]: I0224 05:54:43.715885 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.978864164 podStartE2EDuration="6.715856083s" podCreationTimestamp="2026-02-24 05:54:37 +0000 UTC" firstStartedPulling="2026-02-24 05:54:38.644128344 +0000 UTC m=+1038.346745390" lastFinishedPulling="2026-02-24 05:54:42.381120273 +0000 UTC m=+1042.083737309" observedRunningTime="2026-02-24 05:54:43.709276605 +0000 UTC m=+1043.411893651" watchObservedRunningTime="2026-02-24 05:54:43.715856083 +0000 UTC m=+1043.418473129"
Feb 24 05:54:43.798743 master-0 kubenswrapper[34361]: I0224 05:54:43.798448 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.868746494 podStartE2EDuration="6.798421279s" podCreationTimestamp="2026-02-24 05:54:37 +0000 UTC" firstStartedPulling="2026-02-24 05:54:38.448595911 +0000 UTC m=+1038.151212957" lastFinishedPulling="2026-02-24 05:54:42.378270696 +0000 UTC m=+1042.080887742" observedRunningTime="2026-02-24 05:54:43.733621591 +0000 UTC m=+1043.436238637" watchObservedRunningTime="2026-02-24 05:54:43.798421279 +0000 UTC m=+1043.501038325"
Feb 24 05:54:43.811276 master-0 kubenswrapper[34361]: I0224 05:54:43.810891 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.8923677 podStartE2EDuration="7.810868485s" podCreationTimestamp="2026-02-24 05:54:36 +0000 UTC" firstStartedPulling="2026-02-24 05:54:38.466773601 +0000 UTC m=+1038.169390647" lastFinishedPulling="2026-02-24 05:54:42.385274386 +0000 UTC m=+1042.087891432" observedRunningTime="2026-02-24 05:54:43.757875376 +0000 UTC m=+1043.460492432" watchObservedRunningTime="2026-02-24 05:54:43.810868485 +0000 UTC m=+1043.513485531"
Feb 24 05:54:43.882893 master-0 kubenswrapper[34361]: I0224 05:54:43.882756 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.719033315 podStartE2EDuration="7.882729003s" podCreationTimestamp="2026-02-24 05:54:36 +0000 UTC" firstStartedPulling="2026-02-24 05:54:38.21348804 +0000 UTC m=+1037.916105086" lastFinishedPulling="2026-02-24 05:54:42.377183728 +0000 UTC m=+1042.079800774" observedRunningTime="2026-02-24 05:54:43.782009177 +0000 UTC m=+1043.484626233" watchObservedRunningTime="2026-02-24 05:54:43.882729003 +0000 UTC m=+1043.585346049"
Feb 24 05:54:44.320912 master-0 kubenswrapper[34361]: I0224 05:54:44.320836 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 24 05:54:44.421167 master-0 kubenswrapper[34361]: I0224 05:54:44.421001 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-combined-ca-bundle\") pod \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") "
Feb 24 05:54:44.421476 master-0 kubenswrapper[34361]: I0224 05:54:44.421271 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-logs\") pod \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") "
Feb 24 05:54:44.421527 master-0 kubenswrapper[34361]: I0224 05:54:44.421487 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-config-data\") pod \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") "
Feb 24 05:54:44.421701 master-0 kubenswrapper[34361]: I0224 05:54:44.421674 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j974\" (UniqueName: \"kubernetes.io/projected/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-kube-api-access-2j974\") pod \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\" (UID: \"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1\") "
Feb 24 05:54:44.421955 master-0 kubenswrapper[34361]: I0224 05:54:44.421819 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-logs" (OuterVolumeSpecName: "logs") pod "8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" (UID: "8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:54:44.424109 master-0 kubenswrapper[34361]: I0224 05:54:44.424068 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:54:44.429076 master-0 kubenswrapper[34361]: I0224 05:54:44.429023 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-kube-api-access-2j974" (OuterVolumeSpecName: "kube-api-access-2j974") pod "8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" (UID: "8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1"). InnerVolumeSpecName "kube-api-access-2j974". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:54:44.463709 master-0 kubenswrapper[34361]: I0224 05:54:44.463505 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-config-data" (OuterVolumeSpecName: "config-data") pod "8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" (UID: "8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:54:44.489663 master-0 kubenswrapper[34361]: I0224 05:54:44.489577 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" (UID: "8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:54:44.526996 master-0 kubenswrapper[34361]: I0224 05:54:44.526851 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:54:44.526996 master-0 kubenswrapper[34361]: I0224 05:54:44.526911 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 05:54:44.526996 master-0 kubenswrapper[34361]: I0224 05:54:44.526924 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j974\" (UniqueName: \"kubernetes.io/projected/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1-kube-api-access-2j974\") on node \"master-0\" DevicePath \"\""
Feb 24 05:54:44.725678 master-0 kubenswrapper[34361]: I0224 05:54:44.725624 34361 generic.go:334] "Generic (PLEG): container finished" podID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerID="b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2" exitCode=0
Feb 24 05:54:44.725678 master-0 kubenswrapper[34361]: I0224 05:54:44.725674 34361 generic.go:334] "Generic (PLEG): container finished" podID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerID="db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2" exitCode=143
Feb 24 05:54:44.726207 master-0 kubenswrapper[34361]: I0224 05:54:44.725733 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1","Type":"ContainerDied","Data":"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2"}
Feb 24 05:54:44.726207 master-0 kubenswrapper[34361]: I0224 05:54:44.725771 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1","Type":"ContainerDied","Data":"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2"}
Feb 24 05:54:44.726207 master-0 kubenswrapper[34361]: I0224 05:54:44.725786 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1","Type":"ContainerDied","Data":"b5c09a2e4b2dab303c75fec965c1220a7ff6df827fd3d2ab0559e924fd39672d"}
Feb 24 05:54:44.726207 master-0 kubenswrapper[34361]: I0224 05:54:44.725806 34361 scope.go:117] "RemoveContainer" containerID="b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2"
Feb 24 05:54:44.726207 master-0 kubenswrapper[34361]: I0224 05:54:44.725981 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 24 05:54:44.737247 master-0 kubenswrapper[34361]: I0224 05:54:44.737169 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerStarted","Data":"de6cbb34fb6f88adec0bb19f187337f0239d673c36e1aebc3df9e3814b125565"}
Feb 24 05:54:44.782272 master-0 kubenswrapper[34361]: I0224 05:54:44.782104 34361 scope.go:117] "RemoveContainer" containerID="db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2"
Feb 24 05:54:44.798643 master-0 kubenswrapper[34361]: I0224 05:54:44.795286 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 05:54:44.830503 master-0 kubenswrapper[34361]: I0224 05:54:44.830438 34361 scope.go:117] "RemoveContainer" containerID="b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2"
Feb 24 05:54:44.831761 master-0 kubenswrapper[34361]: E0224 05:54:44.831658 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2\": container with ID starting with b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2 not found: ID does not exist" containerID="b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2"
Feb 24 05:54:44.831761 master-0 kubenswrapper[34361]: I0224 05:54:44.831730 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2"} err="failed to get container status \"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2\": rpc error: code = NotFound desc = could not find container \"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2\": container with ID starting with b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2 not found: ID does not exist"
Feb 24 05:54:44.831860 master-0 kubenswrapper[34361]: I0224 05:54:44.831765 34361 scope.go:117] "RemoveContainer" containerID="db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2"
Feb 24 05:54:44.832985 master-0 kubenswrapper[34361]: E0224 05:54:44.832861 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2\": container with ID starting with db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2 not found: ID does not exist" containerID="db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2"
Feb 24 05:54:44.832985 master-0 kubenswrapper[34361]: I0224 05:54:44.832888 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2"} err="failed to get container status \"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2\": rpc error: code = NotFound desc = could not find container \"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2\": container with ID starting with db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2 not found: ID does not exist"
Feb 24 05:54:44.832985 master-0 kubenswrapper[34361]: I0224 05:54:44.832903 34361 scope.go:117] "RemoveContainer" containerID="b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2"
Feb 24 05:54:44.833192 master-0 kubenswrapper[34361]: I0224 05:54:44.833156 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2"} err="failed to get container status \"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2\": rpc error: code = NotFound desc = could not find container \"b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2\": container with ID starting with b71b3a92533db3dbc2f832db5e681500180ad7c2d7afe854680d4134423eb5d2 not found: ID does not exist"
Feb 24 05:54:44.833192 master-0 kubenswrapper[34361]: I0224 05:54:44.833183 34361 scope.go:117] "RemoveContainer" containerID="db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2"
Feb 24 05:54:44.833668 master-0 kubenswrapper[34361]: I0224 05:54:44.833638 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2"} err="failed to get container status \"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2\": rpc error: code = NotFound desc = could not find container \"db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2\": container with ID starting with db78bd4a1f0111f9317f33807bf9c78e2ff9c3edf44855e4910104247bdb27b2 not found: ID does not exist"
Feb 24 05:54:44.843667 master-0 kubenswrapper[34361]: I0224 05:54:44.843013 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 05:54:44.885847 master-0 kubenswrapper[34361]: I0224 05:54:44.885777 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 05:54:44.886582 master-0 kubenswrapper[34361]: E0224 05:54:44.886563 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerName="nova-metadata-metadata"
Feb 24 05:54:44.886665 master-0 kubenswrapper[34361]: I0224 05:54:44.886585 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerName="nova-metadata-metadata"
Feb 24 05:54:44.886665 master-0 kubenswrapper[34361]: E0224 05:54:44.886621 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerName="nova-metadata-log"
Feb 24 05:54:44.886665 master-0 kubenswrapper[34361]: I0224 05:54:44.886629 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerName="nova-metadata-log"
Feb 24 05:54:44.887184 master-0 kubenswrapper[34361]: I0224 05:54:44.887063 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerName="nova-metadata-metadata"
Feb 24 05:54:44.887184 master-0 kubenswrapper[34361]: I0224 05:54:44.887085 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" containerName="nova-metadata-log"
Feb 24 05:54:44.888794 master-0 kubenswrapper[34361]: I0224 05:54:44.888766 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 24 05:54:44.892426 master-0 kubenswrapper[34361]: I0224 05:54:44.892366 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 24 05:54:44.892504 master-0 kubenswrapper[34361]: I0224 05:54:44.892453 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 24 05:54:44.915533 master-0 kubenswrapper[34361]: I0224 05:54:44.915372 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 05:54:45.040612 master-0 kubenswrapper[34361]: I0224 05:54:45.040368 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glhn7\" (UniqueName: \"kubernetes.io/projected/8d47e818-32f5-4976-9bc3-959daf5d5d73-kube-api-access-glhn7\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.040612 master-0 kubenswrapper[34361]: I0224 05:54:45.040432 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.040612 master-0 kubenswrapper[34361]: I0224 05:54:45.040507 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-config-data\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.041703 master-0 kubenswrapper[34361]: I0224 05:54:45.040684 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.041703 master-0 kubenswrapper[34361]: I0224 05:54:45.041292 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d47e818-32f5-4976-9bc3-959daf5d5d73-logs\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.144929 master-0 kubenswrapper[34361]: I0224 05:54:45.144828 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d47e818-32f5-4976-9bc3-959daf5d5d73-logs\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.145239 master-0 kubenswrapper[34361]: I0224 05:54:45.145004 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glhn7\" (UniqueName: \"kubernetes.io/projected/8d47e818-32f5-4976-9bc3-959daf5d5d73-kube-api-access-glhn7\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.145239 master-0 kubenswrapper[34361]: I0224 05:54:45.145045 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.145239 master-0 kubenswrapper[34361]: I0224 05:54:45.145114 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-config-data\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.145239 master-0 kubenswrapper[34361]: I0224 05:54:45.145172 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.146450 master-0 kubenswrapper[34361]: I0224 05:54:45.145957 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d47e818-32f5-4976-9bc3-959daf5d5d73-logs\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.149886 master-0 kubenswrapper[34361]: I0224 05:54:45.149804 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.155180 master-0 kubenswrapper[34361]: I0224 05:54:45.155035 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.155578 master-0 kubenswrapper[34361]: I0224 05:54:45.155484 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-config-data\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.170826 master-0 kubenswrapper[34361]: I0224 05:54:45.170756 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glhn7\" (UniqueName: \"kubernetes.io/projected/8d47e818-32f5-4976-9bc3-959daf5d5d73-kube-api-access-glhn7\") pod \"nova-metadata-0\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " pod="openstack/nova-metadata-0"
Feb 24 05:54:45.318389 master-0 kubenswrapper[34361]: I0224 05:54:45.318317 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 24 05:54:45.767149 master-0 kubenswrapper[34361]: I0224 05:54:45.767075 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerStarted","Data":"68214f38a3118c1263efeed6718974533074b04f03b5a4ab0d02e68c6438b3e5"}
Feb 24 05:54:45.767149 master-0 kubenswrapper[34361]: I0224 05:54:45.767139 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"74198545-a0ee-4142-93a6-86175a1d3c02","Type":"ContainerStarted","Data":"d6834f295b0bd3919c3799b79181b9d76a32b43c27ba4d2edf631ffa5c4ca655"}
Feb 24 05:54:45.767793 master-0 kubenswrapper[34361]: I0224 05:54:45.767588 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0"
Feb 24 05:54:45.816606 master-0 kubenswrapper[34361]: I0224 05:54:45.816517 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 24 05:54:45.829734 master-0 kubenswrapper[34361]: I0224 05:54:45.829486 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=65.026422711 podStartE2EDuration="1m42.829458077s" podCreationTimestamp="2026-02-24 05:53:03 +0000 UTC" firstStartedPulling="2026-02-24 05:53:15.324770053 +0000 UTC m=+955.027387099" lastFinishedPulling="2026-02-24 05:53:53.127805419 +0000 UTC m=+992.830422465" observedRunningTime="2026-02-24 05:54:45.801854464 +0000 UTC m=+1045.504471530" watchObservedRunningTime="2026-02-24 05:54:45.829458077 +0000 UTC m=+1045.532075123"
Feb 24 05:54:45.996839 master-0 kubenswrapper[34361]: I0224 05:54:45.996750 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0"
Feb 24 05:54:46.616210 master-0 kubenswrapper[34361]: I0224 05:54:46.616006 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1" path="/var/lib/kubelet/pods/8ac6ecc5-c6bf-4048-9ea8-941669b1ecd1/volumes"
Feb 24 05:54:46.783873 master-0 kubenswrapper[34361]: I0224 05:54:46.783794 34361 generic.go:334] "Generic (PLEG): container finished" podID="b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" containerID="e8550d306f17ed524e9dca5e627f8e0163ebe91bf66c9fb133708d5539e8e635" exitCode=0
Feb 24 05:54:46.784532 master-0 kubenswrapper[34361]: I0224 05:54:46.783898 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fck78" event={"ID":"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d","Type":"ContainerDied","Data":"e8550d306f17ed524e9dca5e627f8e0163ebe91bf66c9fb133708d5539e8e635"}
Feb 24 05:54:46.785901 master-0 kubenswrapper[34361]: I0224 05:54:46.785863 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d47e818-32f5-4976-9bc3-959daf5d5d73","Type":"ContainerStarted","Data":"3096270daa9c7c12213ff9ea7640adcac49012e07656f3865f9515eb1c3467c1"}
Feb 24 05:54:46.785901 master-0 kubenswrapper[34361]: I0224 05:54:46.785898 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d47e818-32f5-4976-9bc3-959daf5d5d73","Type":"ContainerStarted","Data":"c748358e5cbe43af6783c712217e4547963a25755e7125847d8bdd3036484652"}
Feb 24 05:54:46.786001 master-0 kubenswrapper[34361]: I0224
05:54:46.785913 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d47e818-32f5-4976-9bc3-959daf5d5d73","Type":"ContainerStarted","Data":"194d58904bf6ceccbd2d44a4a299755634e54cb3b9e6614d13284b5615a96b3e"} Feb 24 05:54:46.786724 master-0 kubenswrapper[34361]: I0224 05:54:46.786675 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 24 05:54:46.844765 master-0 kubenswrapper[34361]: I0224 05:54:46.844646 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.844610677 podStartE2EDuration="2.844610677s" podCreationTimestamp="2026-02-24 05:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:54:46.828074661 +0000 UTC m=+1046.530691727" watchObservedRunningTime="2026-02-24 05:54:46.844610677 +0000 UTC m=+1046.547227733" Feb 24 05:54:47.399926 master-0 kubenswrapper[34361]: I0224 05:54:47.399855 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 05:54:47.400212 master-0 kubenswrapper[34361]: I0224 05:54:47.399957 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 05:54:47.445782 master-0 kubenswrapper[34361]: I0224 05:54:47.445727 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Feb 24 05:54:47.547419 master-0 kubenswrapper[34361]: I0224 05:54:47.547353 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 24 05:54:47.548147 master-0 kubenswrapper[34361]: I0224 05:54:47.548097 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 24 05:54:47.614892 master-0 kubenswrapper[34361]: I0224 05:54:47.614814 
34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 24 05:54:47.627633 master-0 kubenswrapper[34361]: I0224 05:54:47.627559 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:54:47.716172 master-0 kubenswrapper[34361]: I0224 05:54:47.715993 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:54:47.745144 master-0 kubenswrapper[34361]: I0224 05:54:47.745077 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55b78786dc-sn557"] Feb 24 05:54:47.745467 master-0 kubenswrapper[34361]: I0224 05:54:47.745433 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55b78786dc-sn557" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" containerName="dnsmasq-dns" containerID="cri-o://631798e53026c7e2f2a5bac4494414ef08f5d0fe4686810f59fc7283ca65a56d" gracePeriod=10 Feb 24 05:54:48.024031 master-0 kubenswrapper[34361]: I0224 05:54:48.023961 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 24 05:54:48.130019 master-0 kubenswrapper[34361]: I0224 05:54:48.129926 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b78786dc-sn557" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.244:5353: connect: connection refused" Feb 24 05:54:48.481659 master-0 kubenswrapper[34361]: I0224 05:54:48.481559 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 05:54:48.481659 master-0 kubenswrapper[34361]: I0224 
05:54:48.481609 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.252:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 05:54:48.833288 master-0 kubenswrapper[34361]: I0224 05:54:48.833166 34361 generic.go:334] "Generic (PLEG): container finished" podID="719517cc-5f72-4139-aaa2-99bd0923702d" containerID="631798e53026c7e2f2a5bac4494414ef08f5d0fe4686810f59fc7283ca65a56d" exitCode=0 Feb 24 05:54:48.833744 master-0 kubenswrapper[34361]: I0224 05:54:48.833358 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b78786dc-sn557" event={"ID":"719517cc-5f72-4139-aaa2-99bd0923702d","Type":"ContainerDied","Data":"631798e53026c7e2f2a5bac4494414ef08f5d0fe4686810f59fc7283ca65a56d"} Feb 24 05:54:48.897040 master-0 kubenswrapper[34361]: I0224 05:54:48.896433 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 24 05:54:49.852207 master-0 kubenswrapper[34361]: I0224 05:54:49.852136 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 24 05:54:50.319486 master-0 kubenswrapper[34361]: I0224 05:54:50.319396 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 05:54:50.319968 master-0 kubenswrapper[34361]: I0224 05:54:50.319508 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 05:54:52.941338 master-0 kubenswrapper[34361]: I0224 05:54:52.940630 34361 generic.go:334] "Generic (PLEG): container finished" podID="7d0a5bab-2c7d-4526-8505-873c732edcf1" containerID="515d55b62b2b88bfd6765de031608c20d95a75d635ec2d3ad786f86826787472" exitCode=0 Feb 24 05:54:52.941338 master-0 kubenswrapper[34361]: I0224 05:54:52.940708 34361 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lc7xf" event={"ID":"7d0a5bab-2c7d-4526-8505-873c732edcf1","Type":"ContainerDied","Data":"515d55b62b2b88bfd6765de031608c20d95a75d635ec2d3ad786f86826787472"} Feb 24 05:54:53.124656 master-0 kubenswrapper[34361]: I0224 05:54:53.124468 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b78786dc-sn557" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.244:5353: connect: connection refused" Feb 24 05:54:53.361446 master-0 kubenswrapper[34361]: I0224 05:54:53.360603 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:53.473434 master-0 kubenswrapper[34361]: I0224 05:54:53.473217 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-combined-ca-bundle\") pod \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " Feb 24 05:54:53.473434 master-0 kubenswrapper[34361]: I0224 05:54:53.473432 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-scripts\") pod \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " Feb 24 05:54:53.473832 master-0 kubenswrapper[34361]: I0224 05:54:53.473813 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-config-data\") pod \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " Feb 24 05:54:53.473946 master-0 kubenswrapper[34361]: I0224 05:54:53.473906 34361 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-l2sgv\" (UniqueName: \"kubernetes.io/projected/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-kube-api-access-l2sgv\") pod \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\" (UID: \"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d\") " Feb 24 05:54:53.479213 master-0 kubenswrapper[34361]: I0224 05:54:53.479140 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-scripts" (OuterVolumeSpecName: "scripts") pod "b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" (UID: "b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:53.479292 master-0 kubenswrapper[34361]: I0224 05:54:53.479148 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-kube-api-access-l2sgv" (OuterVolumeSpecName: "kube-api-access-l2sgv") pod "b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" (UID: "b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d"). InnerVolumeSpecName "kube-api-access-l2sgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:54:53.485476 master-0 kubenswrapper[34361]: I0224 05:54:53.485428 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b78786dc-sn557" Feb 24 05:54:53.532385 master-0 kubenswrapper[34361]: I0224 05:54:53.530530 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-config-data" (OuterVolumeSpecName: "config-data") pod "b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" (UID: "b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:53.547383 master-0 kubenswrapper[34361]: I0224 05:54:53.545018 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" (UID: "b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:53.577915 master-0 kubenswrapper[34361]: I0224 05:54:53.577772 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.577915 master-0 kubenswrapper[34361]: I0224 05:54:53.577836 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2sgv\" (UniqueName: \"kubernetes.io/projected/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-kube-api-access-l2sgv\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.577915 master-0 kubenswrapper[34361]: I0224 05:54:53.577849 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.577915 master-0 kubenswrapper[34361]: I0224 05:54:53.577857 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.679695 master-0 kubenswrapper[34361]: I0224 05:54:53.679594 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-nb\") pod \"719517cc-5f72-4139-aaa2-99bd0923702d\" (UID: 
\"719517cc-5f72-4139-aaa2-99bd0923702d\") " Feb 24 05:54:53.680103 master-0 kubenswrapper[34361]: I0224 05:54:53.679969 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-swift-storage-0\") pod \"719517cc-5f72-4139-aaa2-99bd0923702d\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " Feb 24 05:54:53.680103 master-0 kubenswrapper[34361]: I0224 05:54:53.680043 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-svc\") pod \"719517cc-5f72-4139-aaa2-99bd0923702d\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " Feb 24 05:54:53.680251 master-0 kubenswrapper[34361]: I0224 05:54:53.680196 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7x5d\" (UniqueName: \"kubernetes.io/projected/719517cc-5f72-4139-aaa2-99bd0923702d-kube-api-access-h7x5d\") pod \"719517cc-5f72-4139-aaa2-99bd0923702d\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " Feb 24 05:54:53.680667 master-0 kubenswrapper[34361]: I0224 05:54:53.680609 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-sb\") pod \"719517cc-5f72-4139-aaa2-99bd0923702d\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " Feb 24 05:54:53.680759 master-0 kubenswrapper[34361]: I0224 05:54:53.680702 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-config\") pod \"719517cc-5f72-4139-aaa2-99bd0923702d\" (UID: \"719517cc-5f72-4139-aaa2-99bd0923702d\") " Feb 24 05:54:53.708014 master-0 kubenswrapper[34361]: I0224 05:54:53.707909 34361 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719517cc-5f72-4139-aaa2-99bd0923702d-kube-api-access-h7x5d" (OuterVolumeSpecName: "kube-api-access-h7x5d") pod "719517cc-5f72-4139-aaa2-99bd0923702d" (UID: "719517cc-5f72-4139-aaa2-99bd0923702d"). InnerVolumeSpecName "kube-api-access-h7x5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:54:53.743757 master-0 kubenswrapper[34361]: I0224 05:54:53.743669 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-config" (OuterVolumeSpecName: "config") pod "719517cc-5f72-4139-aaa2-99bd0923702d" (UID: "719517cc-5f72-4139-aaa2-99bd0923702d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:54:53.747127 master-0 kubenswrapper[34361]: I0224 05:54:53.747055 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "719517cc-5f72-4139-aaa2-99bd0923702d" (UID: "719517cc-5f72-4139-aaa2-99bd0923702d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:54:53.747341 master-0 kubenswrapper[34361]: I0224 05:54:53.747229 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "719517cc-5f72-4139-aaa2-99bd0923702d" (UID: "719517cc-5f72-4139-aaa2-99bd0923702d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:54:53.768154 master-0 kubenswrapper[34361]: I0224 05:54:53.768031 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "719517cc-5f72-4139-aaa2-99bd0923702d" (UID: "719517cc-5f72-4139-aaa2-99bd0923702d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:54:53.770134 master-0 kubenswrapper[34361]: I0224 05:54:53.770051 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "719517cc-5f72-4139-aaa2-99bd0923702d" (UID: "719517cc-5f72-4139-aaa2-99bd0923702d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:54:53.785057 master-0 kubenswrapper[34361]: I0224 05:54:53.784964 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7x5d\" (UniqueName: \"kubernetes.io/projected/719517cc-5f72-4139-aaa2-99bd0923702d-kube-api-access-h7x5d\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.785057 master-0 kubenswrapper[34361]: I0224 05:54:53.785024 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.785057 master-0 kubenswrapper[34361]: I0224 05:54:53.785039 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.785057 master-0 kubenswrapper[34361]: I0224 05:54:53.785059 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.785057 master-0 kubenswrapper[34361]: I0224 05:54:53.785075 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.785747 master-0 kubenswrapper[34361]: I0224 05:54:53.785088 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/719517cc-5f72-4139-aaa2-99bd0923702d-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:53.962344 master-0 kubenswrapper[34361]: I0224 05:54:53.962199 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"455d7bcc-647b-4c91-b293-aaa0cd448723","Type":"ContainerStarted","Data":"7d4d4302aee922336a1cc74071b85c314512326aa951feaea8d500d273b6c940"} Feb 24 05:54:53.963789 master-0 kubenswrapper[34361]: I0224 05:54:53.963692 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:53.966795 master-0 kubenswrapper[34361]: I0224 05:54:53.966736 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55b78786dc-sn557" Feb 24 05:54:53.967064 master-0 kubenswrapper[34361]: I0224 05:54:53.967009 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b78786dc-sn557" event={"ID":"719517cc-5f72-4139-aaa2-99bd0923702d","Type":"ContainerDied","Data":"ee83f25c2f6fb0446e017ccdecde0942684b4e552d4a223f5047ffd46a8aa895"} Feb 24 05:54:53.967195 master-0 kubenswrapper[34361]: I0224 05:54:53.967082 34361 scope.go:117] "RemoveContainer" containerID="631798e53026c7e2f2a5bac4494414ef08f5d0fe4686810f59fc7283ca65a56d" Feb 24 05:54:53.976750 master-0 kubenswrapper[34361]: I0224 05:54:53.976665 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fck78" event={"ID":"b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d","Type":"ContainerDied","Data":"16db2a6f473a7d0f762f0a9daf7dca7456d5d10d6d3a078ce7639ea46abd9ce4"} Feb 24 05:54:53.976750 master-0 kubenswrapper[34361]: I0224 05:54:53.976740 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16db2a6f473a7d0f762f0a9daf7dca7456d5d10d6d3a078ce7639ea46abd9ce4" Feb 24 05:54:53.977060 master-0 kubenswrapper[34361]: I0224 05:54:53.976794 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fck78" Feb 24 05:54:53.996347 master-0 kubenswrapper[34361]: I0224 05:54:53.996041 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=2.812961378 podStartE2EDuration="17.996009285s" podCreationTimestamp="2026-02-24 05:54:36 +0000 UTC" firstStartedPulling="2026-02-24 05:54:37.951639957 +0000 UTC m=+1037.654257003" lastFinishedPulling="2026-02-24 05:54:53.134687864 +0000 UTC m=+1052.837304910" observedRunningTime="2026-02-24 05:54:53.9865804 +0000 UTC m=+1053.689197456" watchObservedRunningTime="2026-02-24 05:54:53.996009285 +0000 UTC m=+1053.698626341" Feb 24 05:54:54.011867 master-0 kubenswrapper[34361]: I0224 05:54:54.011799 34361 scope.go:117] "RemoveContainer" containerID="4db5f799d1cb3b12ed9df426a5a4502b09298ce90f3e8b66ca85c1216d557c0a" Feb 24 05:54:54.019036 master-0 kubenswrapper[34361]: I0224 05:54:54.017228 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 24 05:54:54.036810 master-0 kubenswrapper[34361]: I0224 05:54:54.036686 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55b78786dc-sn557"] Feb 24 05:54:54.053460 master-0 kubenswrapper[34361]: I0224 05:54:54.053267 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55b78786dc-sn557"] Feb 24 05:54:54.597631 master-0 kubenswrapper[34361]: I0224 05:54:54.597566 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:54.619781 master-0 kubenswrapper[34361]: I0224 05:54:54.619685 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-config-data\") pod \"7d0a5bab-2c7d-4526-8505-873c732edcf1\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " Feb 24 05:54:54.620100 master-0 kubenswrapper[34361]: I0224 05:54:54.620055 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-combined-ca-bundle\") pod \"7d0a5bab-2c7d-4526-8505-873c732edcf1\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " Feb 24 05:54:54.622489 master-0 kubenswrapper[34361]: I0224 05:54:54.620201 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-scripts\") pod \"7d0a5bab-2c7d-4526-8505-873c732edcf1\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " Feb 24 05:54:54.622489 master-0 kubenswrapper[34361]: I0224 05:54:54.620302 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fs5m\" (UniqueName: \"kubernetes.io/projected/7d0a5bab-2c7d-4526-8505-873c732edcf1-kube-api-access-8fs5m\") pod \"7d0a5bab-2c7d-4526-8505-873c732edcf1\" (UID: \"7d0a5bab-2c7d-4526-8505-873c732edcf1\") " Feb 24 05:54:54.627416 master-0 kubenswrapper[34361]: I0224 05:54:54.626856 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-scripts" (OuterVolumeSpecName: "scripts") pod "7d0a5bab-2c7d-4526-8505-873c732edcf1" (UID: "7d0a5bab-2c7d-4526-8505-873c732edcf1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:54.639287 master-0 kubenswrapper[34361]: I0224 05:54:54.635714 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d0a5bab-2c7d-4526-8505-873c732edcf1-kube-api-access-8fs5m" (OuterVolumeSpecName: "kube-api-access-8fs5m") pod "7d0a5bab-2c7d-4526-8505-873c732edcf1" (UID: "7d0a5bab-2c7d-4526-8505-873c732edcf1"). InnerVolumeSpecName "kube-api-access-8fs5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:54:54.692836 master-0 kubenswrapper[34361]: I0224 05:54:54.692722 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" path="/var/lib/kubelet/pods/719517cc-5f72-4139-aaa2-99bd0923702d/volumes" Feb 24 05:54:54.699636 master-0 kubenswrapper[34361]: I0224 05:54:54.698137 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:54:54.699636 master-0 kubenswrapper[34361]: I0224 05:54:54.698485 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-log" containerID="cri-o://743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b" gracePeriod=30 Feb 24 05:54:54.699636 master-0 kubenswrapper[34361]: I0224 05:54:54.699266 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-api" containerID="cri-o://783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf" gracePeriod=30 Feb 24 05:54:54.735421 master-0 kubenswrapper[34361]: I0224 05:54:54.728937 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:54.735421 master-0 kubenswrapper[34361]: I0224 
05:54:54.729057 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fs5m\" (UniqueName: \"kubernetes.io/projected/7d0a5bab-2c7d-4526-8505-873c732edcf1-kube-api-access-8fs5m\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:54.736647 master-0 kubenswrapper[34361]: I0224 05:54:54.736061 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:54:54.736647 master-0 kubenswrapper[34361]: I0224 05:54:54.736377 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f527b398-2fba-4b52-bdf5-0bab54c9394b" containerName="nova-scheduler-scheduler" containerID="cri-o://ab1d2a18d057a7e514b3ad03f1314ee6704a40bd22b1cdd2bbeebf6481205336" gracePeriod=30 Feb 24 05:54:54.743290 master-0 kubenswrapper[34361]: I0224 05:54:54.741872 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d0a5bab-2c7d-4526-8505-873c732edcf1" (UID: "7d0a5bab-2c7d-4526-8505-873c732edcf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:54.750535 master-0 kubenswrapper[34361]: I0224 05:54:54.750454 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-config-data" (OuterVolumeSpecName: "config-data") pod "7d0a5bab-2c7d-4526-8505-873c732edcf1" (UID: "7d0a5bab-2c7d-4526-8505-873c732edcf1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:54.756264 master-0 kubenswrapper[34361]: I0224 05:54:54.754359 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:54:54.756264 master-0 kubenswrapper[34361]: I0224 05:54:54.754689 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerName="nova-metadata-log" containerID="cri-o://c748358e5cbe43af6783c712217e4547963a25755e7125847d8bdd3036484652" gracePeriod=30 Feb 24 05:54:54.756264 master-0 kubenswrapper[34361]: I0224 05:54:54.754774 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerName="nova-metadata-metadata" containerID="cri-o://3096270daa9c7c12213ff9ea7640adcac49012e07656f3865f9515eb1c3467c1" gracePeriod=30 Feb 24 05:54:54.833939 master-0 kubenswrapper[34361]: I0224 05:54:54.833899 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:54.834062 master-0 kubenswrapper[34361]: I0224 05:54:54.834049 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d0a5bab-2c7d-4526-8505-873c732edcf1-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:54.997802 master-0 kubenswrapper[34361]: I0224 05:54:54.997738 34361 generic.go:334] "Generic (PLEG): container finished" podID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerID="743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b" exitCode=143 Feb 24 05:54:54.998584 master-0 kubenswrapper[34361]: I0224 05:54:54.997792 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"01fe89a6-ca30-4209-b7e2-a95d94ed57ac","Type":"ContainerDied","Data":"743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b"} Feb 24 05:54:55.001039 master-0 kubenswrapper[34361]: I0224 05:54:55.000395 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lc7xf" event={"ID":"7d0a5bab-2c7d-4526-8505-873c732edcf1","Type":"ContainerDied","Data":"f2fd1d7ef0bf47ac43e0ab7bb68e36eb517781f84ab1c43b2e45aa3ca590517a"} Feb 24 05:54:55.001039 master-0 kubenswrapper[34361]: I0224 05:54:55.000466 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2fd1d7ef0bf47ac43e0ab7bb68e36eb517781f84ab1c43b2e45aa3ca590517a" Feb 24 05:54:55.001039 master-0 kubenswrapper[34361]: I0224 05:54:55.000580 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lc7xf" Feb 24 05:54:55.012485 master-0 kubenswrapper[34361]: I0224 05:54:55.012363 34361 generic.go:334] "Generic (PLEG): container finished" podID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerID="3096270daa9c7c12213ff9ea7640adcac49012e07656f3865f9515eb1c3467c1" exitCode=0 Feb 24 05:54:55.012485 master-0 kubenswrapper[34361]: I0224 05:54:55.012412 34361 generic.go:334] "Generic (PLEG): container finished" podID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerID="c748358e5cbe43af6783c712217e4547963a25755e7125847d8bdd3036484652" exitCode=143 Feb 24 05:54:55.012816 master-0 kubenswrapper[34361]: I0224 05:54:55.012436 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d47e818-32f5-4976-9bc3-959daf5d5d73","Type":"ContainerDied","Data":"3096270daa9c7c12213ff9ea7640adcac49012e07656f3865f9515eb1c3467c1"} Feb 24 05:54:55.012816 master-0 kubenswrapper[34361]: I0224 05:54:55.012570 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"8d47e818-32f5-4976-9bc3-959daf5d5d73","Type":"ContainerDied","Data":"c748358e5cbe43af6783c712217e4547963a25755e7125847d8bdd3036484652"} Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: I0224 05:54:55.174113 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: E0224 05:54:55.174879 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" containerName="nova-manage" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: I0224 05:54:55.174895 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" containerName="nova-manage" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: E0224 05:54:55.174975 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" containerName="dnsmasq-dns" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: I0224 05:54:55.175047 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" containerName="dnsmasq-dns" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: E0224 05:54:55.175089 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d0a5bab-2c7d-4526-8505-873c732edcf1" containerName="nova-cell1-conductor-db-sync" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: I0224 05:54:55.175098 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d0a5bab-2c7d-4526-8505-873c732edcf1" containerName="nova-cell1-conductor-db-sync" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: E0224 05:54:55.175142 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" containerName="init" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: I0224 05:54:55.175168 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" 
containerName="init" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: I0224 05:54:55.175606 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="719517cc-5f72-4139-aaa2-99bd0923702d" containerName="dnsmasq-dns" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: I0224 05:54:55.175630 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" containerName="nova-manage" Feb 24 05:54:55.176347 master-0 kubenswrapper[34361]: I0224 05:54:55.175666 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d0a5bab-2c7d-4526-8505-873c732edcf1" containerName="nova-cell1-conductor-db-sync" Feb 24 05:54:55.177161 master-0 kubenswrapper[34361]: I0224 05:54:55.176651 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.182559 master-0 kubenswrapper[34361]: I0224 05:54:55.180659 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 24 05:54:55.194216 master-0 kubenswrapper[34361]: I0224 05:54:55.192778 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 05:54:55.254960 master-0 kubenswrapper[34361]: I0224 05:54:55.254602 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a9834a-4aeb-463c-988d-2c7acacbc4c2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.255265 master-0 kubenswrapper[34361]: I0224 05:54:55.254951 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a9834a-4aeb-463c-988d-2c7acacbc4c2-config-data\") pod \"nova-cell1-conductor-0\" (UID: 
\"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.255265 master-0 kubenswrapper[34361]: I0224 05:54:55.255017 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spngk\" (UniqueName: \"kubernetes.io/projected/54a9834a-4aeb-463c-988d-2c7acacbc4c2-kube-api-access-spngk\") pod \"nova-cell1-conductor-0\" (UID: \"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.358060 master-0 kubenswrapper[34361]: I0224 05:54:55.357912 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a9834a-4aeb-463c-988d-2c7acacbc4c2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.358685 master-0 kubenswrapper[34361]: I0224 05:54:55.358151 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a9834a-4aeb-463c-988d-2c7acacbc4c2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.358685 master-0 kubenswrapper[34361]: I0224 05:54:55.358183 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spngk\" (UniqueName: \"kubernetes.io/projected/54a9834a-4aeb-463c-988d-2c7acacbc4c2-kube-api-access-spngk\") pod \"nova-cell1-conductor-0\" (UID: \"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.362326 master-0 kubenswrapper[34361]: I0224 05:54:55.362257 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54a9834a-4aeb-463c-988d-2c7acacbc4c2-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: 
\"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.363730 master-0 kubenswrapper[34361]: I0224 05:54:55.363672 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54a9834a-4aeb-463c-988d-2c7acacbc4c2-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.414261 master-0 kubenswrapper[34361]: I0224 05:54:55.414177 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spngk\" (UniqueName: \"kubernetes.io/projected/54a9834a-4aeb-463c-988d-2c7acacbc4c2-kube-api-access-spngk\") pod \"nova-cell1-conductor-0\" (UID: \"54a9834a-4aeb-463c-988d-2c7acacbc4c2\") " pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.515518 master-0 kubenswrapper[34361]: I0224 05:54:55.515168 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:55.521467 master-0 kubenswrapper[34361]: I0224 05:54:55.521416 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:54:55.563256 master-0 kubenswrapper[34361]: I0224 05:54:55.563176 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-combined-ca-bundle\") pod \"8d47e818-32f5-4976-9bc3-959daf5d5d73\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " Feb 24 05:54:55.601814 master-0 kubenswrapper[34361]: I0224 05:54:55.600889 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d47e818-32f5-4976-9bc3-959daf5d5d73" (UID: "8d47e818-32f5-4976-9bc3-959daf5d5d73"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:55.670780 master-0 kubenswrapper[34361]: I0224 05:54:55.670697 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d47e818-32f5-4976-9bc3-959daf5d5d73-logs" (OuterVolumeSpecName: "logs") pod "8d47e818-32f5-4976-9bc3-959daf5d5d73" (UID: "8d47e818-32f5-4976-9bc3-959daf5d5d73"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:54:55.671038 master-0 kubenswrapper[34361]: I0224 05:54:55.670793 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d47e818-32f5-4976-9bc3-959daf5d5d73-logs\") pod \"8d47e818-32f5-4976-9bc3-959daf5d5d73\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " Feb 24 05:54:55.671259 master-0 kubenswrapper[34361]: I0224 05:54:55.671221 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-config-data\") pod \"8d47e818-32f5-4976-9bc3-959daf5d5d73\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " Feb 24 05:54:55.671463 master-0 kubenswrapper[34361]: I0224 05:54:55.671439 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-nova-metadata-tls-certs\") pod \"8d47e818-32f5-4976-9bc3-959daf5d5d73\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " Feb 24 05:54:55.672490 master-0 kubenswrapper[34361]: I0224 05:54:55.672443 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glhn7\" (UniqueName: \"kubernetes.io/projected/8d47e818-32f5-4976-9bc3-959daf5d5d73-kube-api-access-glhn7\") pod \"8d47e818-32f5-4976-9bc3-959daf5d5d73\" (UID: \"8d47e818-32f5-4976-9bc3-959daf5d5d73\") " Feb 24 
05:54:55.678537 master-0 kubenswrapper[34361]: I0224 05:54:55.677795 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:55.678537 master-0 kubenswrapper[34361]: I0224 05:54:55.677849 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d47e818-32f5-4976-9bc3-959daf5d5d73-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:55.695796 master-0 kubenswrapper[34361]: I0224 05:54:55.686941 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d47e818-32f5-4976-9bc3-959daf5d5d73-kube-api-access-glhn7" (OuterVolumeSpecName: "kube-api-access-glhn7") pod "8d47e818-32f5-4976-9bc3-959daf5d5d73" (UID: "8d47e818-32f5-4976-9bc3-959daf5d5d73"). InnerVolumeSpecName "kube-api-access-glhn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:54:55.783726 master-0 kubenswrapper[34361]: I0224 05:54:55.783567 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glhn7\" (UniqueName: \"kubernetes.io/projected/8d47e818-32f5-4976-9bc3-959daf5d5d73-kube-api-access-glhn7\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:55.787883 master-0 kubenswrapper[34361]: I0224 05:54:55.787824 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-config-data" (OuterVolumeSpecName: "config-data") pod "8d47e818-32f5-4976-9bc3-959daf5d5d73" (UID: "8d47e818-32f5-4976-9bc3-959daf5d5d73"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:55.797915 master-0 kubenswrapper[34361]: I0224 05:54:55.797503 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "8d47e818-32f5-4976-9bc3-959daf5d5d73" (UID: "8d47e818-32f5-4976-9bc3-959daf5d5d73"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:55.889879 master-0 kubenswrapper[34361]: I0224 05:54:55.888921 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:55.889879 master-0 kubenswrapper[34361]: I0224 05:54:55.888978 34361 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d47e818-32f5-4976-9bc3-959daf5d5d73-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:56.036598 master-0 kubenswrapper[34361]: I0224 05:54:56.036454 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:54:56.037540 master-0 kubenswrapper[34361]: I0224 05:54:56.037281 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8d47e818-32f5-4976-9bc3-959daf5d5d73","Type":"ContainerDied","Data":"194d58904bf6ceccbd2d44a4a299755634e54cb3b9e6614d13284b5615a96b3e"} Feb 24 05:54:56.037540 master-0 kubenswrapper[34361]: I0224 05:54:56.037495 34361 scope.go:117] "RemoveContainer" containerID="3096270daa9c7c12213ff9ea7640adcac49012e07656f3865f9515eb1c3467c1" Feb 24 05:54:56.093095 master-0 kubenswrapper[34361]: I0224 05:54:56.093024 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 24 05:54:56.112517 master-0 kubenswrapper[34361]: I0224 05:54:56.112351 34361 scope.go:117] "RemoveContainer" containerID="c748358e5cbe43af6783c712217e4547963a25755e7125847d8bdd3036484652" Feb 24 05:54:56.121818 master-0 kubenswrapper[34361]: I0224 05:54:56.120736 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:54:56.121818 master-0 kubenswrapper[34361]: W0224 05:54:56.121783 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54a9834a_4aeb_463c_988d_2c7acacbc4c2.slice/crio-318a469e61a088bc39f6790b7050a2fc6170ea246baf5b43563f1513e0ceab5c WatchSource:0}: Error finding container 318a469e61a088bc39f6790b7050a2fc6170ea246baf5b43563f1513e0ceab5c: Status 404 returned error can't find the container with id 318a469e61a088bc39f6790b7050a2fc6170ea246baf5b43563f1513e0ceab5c Feb 24 05:54:56.135586 master-0 kubenswrapper[34361]: I0224 05:54:56.135503 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:54:56.168424 master-0 kubenswrapper[34361]: I0224 05:54:56.168179 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:54:56.169054 
master-0 kubenswrapper[34361]: E0224 05:54:56.168990 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerName="nova-metadata-log" Feb 24 05:54:56.169054 master-0 kubenswrapper[34361]: I0224 05:54:56.169012 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerName="nova-metadata-log" Feb 24 05:54:56.169219 master-0 kubenswrapper[34361]: E0224 05:54:56.169087 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerName="nova-metadata-metadata" Feb 24 05:54:56.169219 master-0 kubenswrapper[34361]: I0224 05:54:56.169095 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerName="nova-metadata-metadata" Feb 24 05:54:56.169492 master-0 kubenswrapper[34361]: I0224 05:54:56.169460 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerName="nova-metadata-log" Feb 24 05:54:56.169492 master-0 kubenswrapper[34361]: I0224 05:54:56.169492 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" containerName="nova-metadata-metadata" Feb 24 05:54:56.171081 master-0 kubenswrapper[34361]: I0224 05:54:56.171051 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:54:56.174140 master-0 kubenswrapper[34361]: I0224 05:54:56.174083 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 05:54:56.175171 master-0 kubenswrapper[34361]: I0224 05:54:56.175145 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 24 05:54:56.193063 master-0 kubenswrapper[34361]: I0224 05:54:56.184164 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:54:56.308559 master-0 kubenswrapper[34361]: I0224 05:54:56.301869 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2qg\" (UniqueName: \"kubernetes.io/projected/cefa04df-75ac-48a5-ac80-62009d398d01-kube-api-access-qt2qg\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.308559 master-0 kubenswrapper[34361]: I0224 05:54:56.301986 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-config-data\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.308559 master-0 kubenswrapper[34361]: I0224 05:54:56.302025 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.308559 master-0 kubenswrapper[34361]: I0224 05:54:56.302069 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cefa04df-75ac-48a5-ac80-62009d398d01-logs\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.308559 master-0 kubenswrapper[34361]: I0224 05:54:56.302139 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.407175 master-0 kubenswrapper[34361]: I0224 05:54:56.406086 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cefa04df-75ac-48a5-ac80-62009d398d01-logs\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.407175 master-0 kubenswrapper[34361]: I0224 05:54:56.406242 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.407175 master-0 kubenswrapper[34361]: I0224 05:54:56.406400 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt2qg\" (UniqueName: \"kubernetes.io/projected/cefa04df-75ac-48a5-ac80-62009d398d01-kube-api-access-qt2qg\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.407175 master-0 kubenswrapper[34361]: I0224 05:54:56.406458 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-config-data\") pod 
\"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.407175 master-0 kubenswrapper[34361]: I0224 05:54:56.406488 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.410601 master-0 kubenswrapper[34361]: I0224 05:54:56.410532 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cefa04df-75ac-48a5-ac80-62009d398d01-logs\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.413567 master-0 kubenswrapper[34361]: I0224 05:54:56.413521 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.413924 master-0 kubenswrapper[34361]: I0224 05:54:56.413887 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.432937 master-0 kubenswrapper[34361]: I0224 05:54:56.432860 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-config-data\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.448104 master-0 
kubenswrapper[34361]: I0224 05:54:56.448015 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2qg\" (UniqueName: \"kubernetes.io/projected/cefa04df-75ac-48a5-ac80-62009d398d01-kube-api-access-qt2qg\") pod \"nova-metadata-0\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " pod="openstack/nova-metadata-0" Feb 24 05:54:56.535880 master-0 kubenswrapper[34361]: I0224 05:54:56.535817 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:54:56.625141 master-0 kubenswrapper[34361]: I0224 05:54:56.625068 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d47e818-32f5-4976-9bc3-959daf5d5d73" path="/var/lib/kubelet/pods/8d47e818-32f5-4976-9bc3-959daf5d5d73/volumes" Feb 24 05:54:57.016919 master-0 kubenswrapper[34361]: I0224 05:54:57.016823 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:54:57.033034 master-0 kubenswrapper[34361]: W0224 05:54:57.032939 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcefa04df_75ac_48a5_ac80_62009d398d01.slice/crio-99116e412da4f7af95a99f6e64d3b866858d279019f6a24b6b7eb6f179ecdbcf WatchSource:0}: Error finding container 99116e412da4f7af95a99f6e64d3b866858d279019f6a24b6b7eb6f179ecdbcf: Status 404 returned error can't find the container with id 99116e412da4f7af95a99f6e64d3b866858d279019f6a24b6b7eb6f179ecdbcf Feb 24 05:54:57.068283 master-0 kubenswrapper[34361]: I0224 05:54:57.067370 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cefa04df-75ac-48a5-ac80-62009d398d01","Type":"ContainerStarted","Data":"99116e412da4f7af95a99f6e64d3b866858d279019f6a24b6b7eb6f179ecdbcf"} Feb 24 05:54:57.072046 master-0 kubenswrapper[34361]: I0224 05:54:57.071969 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-0" event={"ID":"54a9834a-4aeb-463c-988d-2c7acacbc4c2","Type":"ContainerStarted","Data":"f1de9a7f451064dbdb58ce2ea1d58638bd5e32542d8161a2cd20d0f93df5f7e2"} Feb 24 05:54:57.072046 master-0 kubenswrapper[34361]: I0224 05:54:57.072025 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"54a9834a-4aeb-463c-988d-2c7acacbc4c2","Type":"ContainerStarted","Data":"318a469e61a088bc39f6790b7050a2fc6170ea246baf5b43563f1513e0ceab5c"} Feb 24 05:54:57.072378 master-0 kubenswrapper[34361]: I0224 05:54:57.072340 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 24 05:54:57.114633 master-0 kubenswrapper[34361]: I0224 05:54:57.113395 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.113371662 podStartE2EDuration="2.113371662s" podCreationTimestamp="2026-02-24 05:54:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:54:57.102778057 +0000 UTC m=+1056.805395133" watchObservedRunningTime="2026-02-24 05:54:57.113371662 +0000 UTC m=+1056.815988708" Feb 24 05:54:57.552955 master-0 kubenswrapper[34361]: E0224 05:54:57.552868 34361 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab1d2a18d057a7e514b3ad03f1314ee6704a40bd22b1cdd2bbeebf6481205336" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 05:54:57.556060 master-0 kubenswrapper[34361]: E0224 05:54:57.556010 34361 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="ab1d2a18d057a7e514b3ad03f1314ee6704a40bd22b1cdd2bbeebf6481205336" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 05:54:57.558481 master-0 kubenswrapper[34361]: E0224 05:54:57.558439 34361 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab1d2a18d057a7e514b3ad03f1314ee6704a40bd22b1cdd2bbeebf6481205336" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 05:54:57.558545 master-0 kubenswrapper[34361]: E0224 05:54:57.558482 34361 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f527b398-2fba-4b52-bdf5-0bab54c9394b" containerName="nova-scheduler-scheduler" Feb 24 05:54:58.094633 master-0 kubenswrapper[34361]: I0224 05:54:58.094539 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cefa04df-75ac-48a5-ac80-62009d398d01","Type":"ContainerStarted","Data":"b24c1ec9a4cb118cf8f370ab23b3e38523e5d056fef82cbb1a6b9b9ca58ab3a8"} Feb 24 05:54:58.094633 master-0 kubenswrapper[34361]: I0224 05:54:58.094616 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cefa04df-75ac-48a5-ac80-62009d398d01","Type":"ContainerStarted","Data":"3f70fabdaa1c10de1289e9175a4e84b8bb9b8438a37c75570f67615cd4a67a5f"} Feb 24 05:54:58.135372 master-0 kubenswrapper[34361]: I0224 05:54:58.135193 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.13515477 podStartE2EDuration="2.13515477s" podCreationTimestamp="2026-02-24 05:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 
05:54:58.115962482 +0000 UTC m=+1057.818579568" watchObservedRunningTime="2026-02-24 05:54:58.13515477 +0000 UTC m=+1057.837771856" Feb 24 05:54:58.798039 master-0 kubenswrapper[34361]: I0224 05:54:58.793743 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:54:58.903852 master-0 kubenswrapper[34361]: I0224 05:54:58.903781 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-combined-ca-bundle\") pod \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " Feb 24 05:54:58.904130 master-0 kubenswrapper[34361]: I0224 05:54:58.903907 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltcms\" (UniqueName: \"kubernetes.io/projected/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-kube-api-access-ltcms\") pod \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " Feb 24 05:54:58.904130 master-0 kubenswrapper[34361]: I0224 05:54:58.904122 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-logs\") pod \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " Feb 24 05:54:58.904262 master-0 kubenswrapper[34361]: I0224 05:54:58.904240 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-config-data\") pod \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\" (UID: \"01fe89a6-ca30-4209-b7e2-a95d94ed57ac\") " Feb 24 05:54:58.904842 master-0 kubenswrapper[34361]: I0224 05:54:58.904777 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-logs" (OuterVolumeSpecName: "logs") pod "01fe89a6-ca30-4209-b7e2-a95d94ed57ac" (UID: "01fe89a6-ca30-4209-b7e2-a95d94ed57ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:54:58.905661 master-0 kubenswrapper[34361]: I0224 05:54:58.905628 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:58.908162 master-0 kubenswrapper[34361]: I0224 05:54:58.908093 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-kube-api-access-ltcms" (OuterVolumeSpecName: "kube-api-access-ltcms") pod "01fe89a6-ca30-4209-b7e2-a95d94ed57ac" (UID: "01fe89a6-ca30-4209-b7e2-a95d94ed57ac"). InnerVolumeSpecName "kube-api-access-ltcms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:54:58.942907 master-0 kubenswrapper[34361]: I0224 05:54:58.942717 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01fe89a6-ca30-4209-b7e2-a95d94ed57ac" (UID: "01fe89a6-ca30-4209-b7e2-a95d94ed57ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:58.944650 master-0 kubenswrapper[34361]: I0224 05:54:58.944509 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-config-data" (OuterVolumeSpecName: "config-data") pod "01fe89a6-ca30-4209-b7e2-a95d94ed57ac" (UID: "01fe89a6-ca30-4209-b7e2-a95d94ed57ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:54:59.008706 master-0 kubenswrapper[34361]: I0224 05:54:59.008560 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:59.008706 master-0 kubenswrapper[34361]: I0224 05:54:59.008687 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:59.008706 master-0 kubenswrapper[34361]: I0224 05:54:59.008719 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltcms\" (UniqueName: \"kubernetes.io/projected/01fe89a6-ca30-4209-b7e2-a95d94ed57ac-kube-api-access-ltcms\") on node \"master-0\" DevicePath \"\"" Feb 24 05:54:59.120645 master-0 kubenswrapper[34361]: I0224 05:54:59.120574 34361 generic.go:334] "Generic (PLEG): container finished" podID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerID="783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf" exitCode=0 Feb 24 05:54:59.121930 master-0 kubenswrapper[34361]: I0224 05:54:59.120751 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:54:59.121930 master-0 kubenswrapper[34361]: I0224 05:54:59.121632 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01fe89a6-ca30-4209-b7e2-a95d94ed57ac","Type":"ContainerDied","Data":"783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf"} Feb 24 05:54:59.121930 master-0 kubenswrapper[34361]: I0224 05:54:59.121672 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"01fe89a6-ca30-4209-b7e2-a95d94ed57ac","Type":"ContainerDied","Data":"d690055e2baf2f7400223cde46bc419abce14cd30a198ae7d6510596a2951437"} Feb 24 05:54:59.121930 master-0 kubenswrapper[34361]: I0224 05:54:59.121694 34361 scope.go:117] "RemoveContainer" containerID="783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf" Feb 24 05:54:59.164926 master-0 kubenswrapper[34361]: I0224 05:54:59.164552 34361 scope.go:117] "RemoveContainer" containerID="743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b" Feb 24 05:54:59.210054 master-0 kubenswrapper[34361]: I0224 05:54:59.209848 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:54:59.216334 master-0 kubenswrapper[34361]: I0224 05:54:59.216217 34361 scope.go:117] "RemoveContainer" containerID="783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf" Feb 24 05:54:59.216909 master-0 kubenswrapper[34361]: E0224 05:54:59.216863 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf\": container with ID starting with 783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf not found: ID does not exist" containerID="783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf" Feb 24 05:54:59.217039 master-0 kubenswrapper[34361]: I0224 05:54:59.216919 34361 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf"} err="failed to get container status \"783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf\": rpc error: code = NotFound desc = could not find container \"783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf\": container with ID starting with 783ce86def8ae54d1eda1ad07d9334fcaa1758e41e4eae2748be7ffac7040baf not found: ID does not exist" Feb 24 05:54:59.217039 master-0 kubenswrapper[34361]: I0224 05:54:59.216956 34361 scope.go:117] "RemoveContainer" containerID="743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b" Feb 24 05:54:59.217434 master-0 kubenswrapper[34361]: E0224 05:54:59.217376 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b\": container with ID starting with 743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b not found: ID does not exist" containerID="743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b" Feb 24 05:54:59.217434 master-0 kubenswrapper[34361]: I0224 05:54:59.217419 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b"} err="failed to get container status \"743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b\": rpc error: code = NotFound desc = could not find container \"743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b\": container with ID starting with 743f6d4b679a2fbafa8e8e2923430908e23e11b1bcc4dbf8dee74ad6c001272b not found: ID does not exist" Feb 24 05:54:59.229595 master-0 kubenswrapper[34361]: I0224 05:54:59.229484 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:54:59.248650 master-0 kubenswrapper[34361]: I0224 
05:54:59.243471 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 24 05:54:59.248650 master-0 kubenswrapper[34361]: E0224 05:54:59.244388 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-log" Feb 24 05:54:59.248650 master-0 kubenswrapper[34361]: I0224 05:54:59.244408 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-log" Feb 24 05:54:59.248650 master-0 kubenswrapper[34361]: E0224 05:54:59.244496 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-api" Feb 24 05:54:59.248650 master-0 kubenswrapper[34361]: I0224 05:54:59.244506 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-api" Feb 24 05:54:59.248650 master-0 kubenswrapper[34361]: I0224 05:54:59.244833 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-log" Feb 24 05:54:59.248650 master-0 kubenswrapper[34361]: I0224 05:54:59.244907 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" containerName="nova-api-api" Feb 24 05:54:59.248650 master-0 kubenswrapper[34361]: I0224 05:54:59.246769 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:54:59.252215 master-0 kubenswrapper[34361]: I0224 05:54:59.252157 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 24 05:54:59.255794 master-0 kubenswrapper[34361]: I0224 05:54:59.255712 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:54:59.316830 master-0 kubenswrapper[34361]: I0224 05:54:59.316758 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.316830 master-0 kubenswrapper[34361]: I0224 05:54:59.316818 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-config-data\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.317190 master-0 kubenswrapper[34361]: I0224 05:54:59.316879 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-logs\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.317190 master-0 kubenswrapper[34361]: I0224 05:54:59.316921 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfr24\" (UniqueName: \"kubernetes.io/projected/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-kube-api-access-vfr24\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.427477 master-0 kubenswrapper[34361]: I0224 
05:54:59.419175 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-logs\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.427477 master-0 kubenswrapper[34361]: I0224 05:54:59.419267 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfr24\" (UniqueName: \"kubernetes.io/projected/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-kube-api-access-vfr24\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.427477 master-0 kubenswrapper[34361]: I0224 05:54:59.419448 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.427477 master-0 kubenswrapper[34361]: I0224 05:54:59.419474 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-config-data\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.427477 master-0 kubenswrapper[34361]: I0224 05:54:59.420358 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-logs\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.427477 master-0 kubenswrapper[34361]: I0224 05:54:59.424950 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-config-data\") pod 
\"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.427477 master-0 kubenswrapper[34361]: I0224 05:54:59.425427 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.438463 master-0 kubenswrapper[34361]: I0224 05:54:59.438395 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfr24\" (UniqueName: \"kubernetes.io/projected/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-kube-api-access-vfr24\") pod \"nova-api-0\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") " pod="openstack/nova-api-0" Feb 24 05:54:59.584042 master-0 kubenswrapper[34361]: I0224 05:54:59.583907 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:55:00.122075 master-0 kubenswrapper[34361]: W0224 05:55:00.121817 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab08d4ad_e3ec_4e99_96ce_f6242fe48f90.slice/crio-5ce2992ec65d69a669593b6dbc2fac418731493a431bfcd324af7f4dec4d669c WatchSource:0}: Error finding container 5ce2992ec65d69a669593b6dbc2fac418731493a431bfcd324af7f4dec4d669c: Status 404 returned error can't find the container with id 5ce2992ec65d69a669593b6dbc2fac418731493a431bfcd324af7f4dec4d669c Feb 24 05:55:00.125942 master-0 kubenswrapper[34361]: I0224 05:55:00.125852 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:55:00.171007 master-0 kubenswrapper[34361]: I0224 05:55:00.170359 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90","Type":"ContainerStarted","Data":"5ce2992ec65d69a669593b6dbc2fac418731493a431bfcd324af7f4dec4d669c"} Feb 24 05:55:00.174954 master-0 kubenswrapper[34361]: I0224 05:55:00.174905 34361 generic.go:334] "Generic (PLEG): container finished" podID="f527b398-2fba-4b52-bdf5-0bab54c9394b" containerID="ab1d2a18d057a7e514b3ad03f1314ee6704a40bd22b1cdd2bbeebf6481205336" exitCode=0 Feb 24 05:55:00.175193 master-0 kubenswrapper[34361]: I0224 05:55:00.175044 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f527b398-2fba-4b52-bdf5-0bab54c9394b","Type":"ContainerDied","Data":"ab1d2a18d057a7e514b3ad03f1314ee6704a40bd22b1cdd2bbeebf6481205336"} Feb 24 05:55:00.516709 master-0 kubenswrapper[34361]: I0224 05:55:00.516642 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:55:00.638612 master-0 kubenswrapper[34361]: I0224 05:55:00.638539 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01fe89a6-ca30-4209-b7e2-a95d94ed57ac" path="/var/lib/kubelet/pods/01fe89a6-ca30-4209-b7e2-a95d94ed57ac/volumes" Feb 24 05:55:00.674428 master-0 kubenswrapper[34361]: I0224 05:55:00.670660 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4l5\" (UniqueName: \"kubernetes.io/projected/f527b398-2fba-4b52-bdf5-0bab54c9394b-kube-api-access-6g4l5\") pod \"f527b398-2fba-4b52-bdf5-0bab54c9394b\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " Feb 24 05:55:00.674428 master-0 kubenswrapper[34361]: I0224 05:55:00.670739 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-combined-ca-bundle\") pod \"f527b398-2fba-4b52-bdf5-0bab54c9394b\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " Feb 24 05:55:00.674428 master-0 
kubenswrapper[34361]: I0224 05:55:00.670870 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-config-data\") pod \"f527b398-2fba-4b52-bdf5-0bab54c9394b\" (UID: \"f527b398-2fba-4b52-bdf5-0bab54c9394b\") " Feb 24 05:55:00.675795 master-0 kubenswrapper[34361]: I0224 05:55:00.675069 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f527b398-2fba-4b52-bdf5-0bab54c9394b-kube-api-access-6g4l5" (OuterVolumeSpecName: "kube-api-access-6g4l5") pod "f527b398-2fba-4b52-bdf5-0bab54c9394b" (UID: "f527b398-2fba-4b52-bdf5-0bab54c9394b"). InnerVolumeSpecName "kube-api-access-6g4l5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:55:00.715271 master-0 kubenswrapper[34361]: I0224 05:55:00.715200 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-config-data" (OuterVolumeSpecName: "config-data") pod "f527b398-2fba-4b52-bdf5-0bab54c9394b" (UID: "f527b398-2fba-4b52-bdf5-0bab54c9394b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:00.772093 master-0 kubenswrapper[34361]: I0224 05:55:00.771686 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f527b398-2fba-4b52-bdf5-0bab54c9394b" (UID: "f527b398-2fba-4b52-bdf5-0bab54c9394b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:00.774597 master-0 kubenswrapper[34361]: I0224 05:55:00.774521 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g4l5\" (UniqueName: \"kubernetes.io/projected/f527b398-2fba-4b52-bdf5-0bab54c9394b-kube-api-access-6g4l5\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:00.774597 master-0 kubenswrapper[34361]: I0224 05:55:00.774589 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:00.774709 master-0 kubenswrapper[34361]: I0224 05:55:00.774604 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f527b398-2fba-4b52-bdf5-0bab54c9394b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:01.193084 master-0 kubenswrapper[34361]: I0224 05:55:01.192999 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:55:01.194542 master-0 kubenswrapper[34361]: I0224 05:55:01.193532 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f527b398-2fba-4b52-bdf5-0bab54c9394b","Type":"ContainerDied","Data":"be0ff1f910ca308d084f34ab8fed9c51995a0f0f24bac74a2f09f119f0beb28c"} Feb 24 05:55:01.194542 master-0 kubenswrapper[34361]: I0224 05:55:01.193729 34361 scope.go:117] "RemoveContainer" containerID="ab1d2a18d057a7e514b3ad03f1314ee6704a40bd22b1cdd2bbeebf6481205336" Feb 24 05:55:01.197907 master-0 kubenswrapper[34361]: I0224 05:55:01.197821 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90","Type":"ContainerStarted","Data":"45519f82cb58fe639143471d2ff7b23337594f893e8e328ded52c40f36c082fb"} Feb 24 05:55:01.197907 master-0 kubenswrapper[34361]: I0224 05:55:01.197900 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90","Type":"ContainerStarted","Data":"a6bb288e8b19f3d5a9ba17f1c1e199f60015d808b08b91cdf75f3da907a5a88b"} Feb 24 05:55:01.238187 master-0 kubenswrapper[34361]: I0224 05:55:01.238062 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.238029357 podStartE2EDuration="2.238029357s" podCreationTimestamp="2026-02-24 05:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:01.224996395 +0000 UTC m=+1060.927613451" watchObservedRunningTime="2026-02-24 05:55:01.238029357 +0000 UTC m=+1060.940646443" Feb 24 05:55:01.292790 master-0 kubenswrapper[34361]: I0224 05:55:01.292702 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:01.346522 master-0 kubenswrapper[34361]: I0224 05:55:01.340493 
34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:01.357363 master-0 kubenswrapper[34361]: I0224 05:55:01.357258 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:01.358161 master-0 kubenswrapper[34361]: E0224 05:55:01.358120 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f527b398-2fba-4b52-bdf5-0bab54c9394b" containerName="nova-scheduler-scheduler" Feb 24 05:55:01.358161 master-0 kubenswrapper[34361]: I0224 05:55:01.358156 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="f527b398-2fba-4b52-bdf5-0bab54c9394b" containerName="nova-scheduler-scheduler" Feb 24 05:55:01.358626 master-0 kubenswrapper[34361]: I0224 05:55:01.358594 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="f527b398-2fba-4b52-bdf5-0bab54c9394b" containerName="nova-scheduler-scheduler" Feb 24 05:55:01.359681 master-0 kubenswrapper[34361]: I0224 05:55:01.359649 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:55:01.363750 master-0 kubenswrapper[34361]: I0224 05:55:01.363698 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 24 05:55:01.370374 master-0 kubenswrapper[34361]: I0224 05:55:01.370301 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:01.506149 master-0 kubenswrapper[34361]: I0224 05:55:01.506049 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.506149 master-0 kubenswrapper[34361]: I0224 05:55:01.506153 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-config-data\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.508922 master-0 kubenswrapper[34361]: I0224 05:55:01.508845 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc284\" (UniqueName: \"kubernetes.io/projected/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-kube-api-access-pc284\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.536961 master-0 kubenswrapper[34361]: I0224 05:55:01.536861 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 05:55:01.536961 master-0 kubenswrapper[34361]: I0224 05:55:01.536941 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 05:55:01.612713 
master-0 kubenswrapper[34361]: I0224 05:55:01.612424 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc284\" (UniqueName: \"kubernetes.io/projected/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-kube-api-access-pc284\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.613118 master-0 kubenswrapper[34361]: I0224 05:55:01.613082 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.613220 master-0 kubenswrapper[34361]: I0224 05:55:01.613193 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-config-data\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.621136 master-0 kubenswrapper[34361]: I0224 05:55:01.621061 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-config-data\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.621479 master-0 kubenswrapper[34361]: I0224 05:55:01.621406 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.636540 master-0 kubenswrapper[34361]: I0224 05:55:01.636470 34361 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pc284\" (UniqueName: \"kubernetes.io/projected/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-kube-api-access-pc284\") pod \"nova-scheduler-0\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:01.695496 master-0 kubenswrapper[34361]: I0224 05:55:01.695405 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:55:02.267467 master-0 kubenswrapper[34361]: I0224 05:55:02.267385 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:02.268255 master-0 kubenswrapper[34361]: W0224 05:55:02.268205 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c71a7ab_625b_4ba1_b7d2_7832cc1ba651.slice/crio-e0b810cc86386ad7bdb27f354891aa09362873975a56c6fac62f49d32498d1fb WatchSource:0}: Error finding container e0b810cc86386ad7bdb27f354891aa09362873975a56c6fac62f49d32498d1fb: Status 404 returned error can't find the container with id e0b810cc86386ad7bdb27f354891aa09362873975a56c6fac62f49d32498d1fb Feb 24 05:55:02.625806 master-0 kubenswrapper[34361]: I0224 05:55:02.625682 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f527b398-2fba-4b52-bdf5-0bab54c9394b" path="/var/lib/kubelet/pods/f527b398-2fba-4b52-bdf5-0bab54c9394b/volumes" Feb 24 05:55:03.236344 master-0 kubenswrapper[34361]: I0224 05:55:03.236247 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651","Type":"ContainerStarted","Data":"59a229cc27e8ab8436e29ad9f6fade4482c5b173ffe204683a6a47581c238e04"} Feb 24 05:55:03.236958 master-0 kubenswrapper[34361]: I0224 05:55:03.236929 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651","Type":"ContainerStarted","Data":"e0b810cc86386ad7bdb27f354891aa09362873975a56c6fac62f49d32498d1fb"} Feb 24 05:55:05.575300 master-0 kubenswrapper[34361]: I0224 05:55:05.575166 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 24 05:55:05.612295 master-0 kubenswrapper[34361]: I0224 05:55:05.612142 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.612101898 podStartE2EDuration="4.612101898s" podCreationTimestamp="2026-02-24 05:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:03.28851816 +0000 UTC m=+1062.991135276" watchObservedRunningTime="2026-02-24 05:55:05.612101898 +0000 UTC m=+1065.314718984" Feb 24 05:55:06.536744 master-0 kubenswrapper[34361]: I0224 05:55:06.536596 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 05:55:06.536744 master-0 kubenswrapper[34361]: I0224 05:55:06.536700 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 05:55:06.697063 master-0 kubenswrapper[34361]: I0224 05:55:06.696954 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 24 05:55:07.555811 master-0 kubenswrapper[34361]: I0224 05:55:07.555569 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:55:07.556345 master-0 kubenswrapper[34361]: I0224 05:55:07.556144 34361 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.4:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:55:09.584815 master-0 kubenswrapper[34361]: I0224 05:55:09.584700 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 05:55:09.584815 master-0 kubenswrapper[34361]: I0224 05:55:09.584810 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 05:55:10.666708 master-0 kubenswrapper[34361]: I0224 05:55:10.666591 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.1.5:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 05:55:10.667565 master-0 kubenswrapper[34361]: I0224 05:55:10.666710 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.1.5:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 05:55:11.696942 master-0 kubenswrapper[34361]: I0224 05:55:11.696848 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 24 05:55:11.743129 master-0 kubenswrapper[34361]: I0224 05:55:11.740880 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 24 05:55:12.454073 master-0 kubenswrapper[34361]: I0224 05:55:12.453980 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 24 05:55:14.327278 master-0 kubenswrapper[34361]: I0224 05:55:14.327091 34361 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:14.452232 master-0 kubenswrapper[34361]: I0224 05:55:14.452151 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-combined-ca-bundle\") pod \"25249dd5-54b4-44dc-ab35-e8532b1d0875\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " Feb 24 05:55:14.452571 master-0 kubenswrapper[34361]: I0224 05:55:14.452420 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-config-data\") pod \"25249dd5-54b4-44dc-ab35-e8532b1d0875\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " Feb 24 05:55:14.452838 master-0 kubenswrapper[34361]: I0224 05:55:14.452801 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9jd\" (UniqueName: \"kubernetes.io/projected/25249dd5-54b4-44dc-ab35-e8532b1d0875-kube-api-access-xg9jd\") pod \"25249dd5-54b4-44dc-ab35-e8532b1d0875\" (UID: \"25249dd5-54b4-44dc-ab35-e8532b1d0875\") " Feb 24 05:55:14.461225 master-0 kubenswrapper[34361]: I0224 05:55:14.461119 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25249dd5-54b4-44dc-ab35-e8532b1d0875-kube-api-access-xg9jd" (OuterVolumeSpecName: "kube-api-access-xg9jd") pod "25249dd5-54b4-44dc-ab35-e8532b1d0875" (UID: "25249dd5-54b4-44dc-ab35-e8532b1d0875"). InnerVolumeSpecName "kube-api-access-xg9jd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:55:14.464032 master-0 kubenswrapper[34361]: I0224 05:55:14.463956 34361 generic.go:334] "Generic (PLEG): container finished" podID="25249dd5-54b4-44dc-ab35-e8532b1d0875" containerID="ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5" exitCode=137 Feb 24 05:55:14.464102 master-0 kubenswrapper[34361]: I0224 05:55:14.464055 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"25249dd5-54b4-44dc-ab35-e8532b1d0875","Type":"ContainerDied","Data":"ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5"} Feb 24 05:55:14.464171 master-0 kubenswrapper[34361]: I0224 05:55:14.464128 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"25249dd5-54b4-44dc-ab35-e8532b1d0875","Type":"ContainerDied","Data":"183ab0821ade2735e9e3e697e6ae92172e0515adcacc52fdfd0a552690c0ef6f"} Feb 24 05:55:14.464221 master-0 kubenswrapper[34361]: I0224 05:55:14.464191 34361 scope.go:117] "RemoveContainer" containerID="ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5" Feb 24 05:55:14.464564 master-0 kubenswrapper[34361]: I0224 05:55:14.464514 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:14.491935 master-0 kubenswrapper[34361]: I0224 05:55:14.491815 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25249dd5-54b4-44dc-ab35-e8532b1d0875" (UID: "25249dd5-54b4-44dc-ab35-e8532b1d0875"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:14.493085 master-0 kubenswrapper[34361]: I0224 05:55:14.493008 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-config-data" (OuterVolumeSpecName: "config-data") pod "25249dd5-54b4-44dc-ab35-e8532b1d0875" (UID: "25249dd5-54b4-44dc-ab35-e8532b1d0875"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:14.555630 master-0 kubenswrapper[34361]: I0224 05:55:14.555558 34361 scope.go:117] "RemoveContainer" containerID="ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5" Feb 24 05:55:14.556477 master-0 kubenswrapper[34361]: E0224 05:55:14.556385 34361 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5\": container with ID starting with ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5 not found: ID does not exist" containerID="ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5" Feb 24 05:55:14.556549 master-0 kubenswrapper[34361]: I0224 05:55:14.556496 34361 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5"} err="failed to get container status \"ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5\": rpc error: code = NotFound desc = could not find container \"ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5\": container with ID starting with ea151b7ae69ab3bd70944165ae65abbf25d3a10ddb132d07a67405c0bb5d49a5 not found: ID does not exist" Feb 24 05:55:14.557514 master-0 kubenswrapper[34361]: I0224 05:55:14.557469 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:14.557619 master-0 kubenswrapper[34361]: I0224 05:55:14.557601 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9jd\" (UniqueName: \"kubernetes.io/projected/25249dd5-54b4-44dc-ab35-e8532b1d0875-kube-api-access-xg9jd\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:14.557701 master-0 kubenswrapper[34361]: I0224 05:55:14.557687 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25249dd5-54b4-44dc-ab35-e8532b1d0875-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:14.876387 master-0 kubenswrapper[34361]: I0224 05:55:14.873648 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 05:55:14.892340 master-0 kubenswrapper[34361]: I0224 05:55:14.890019 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 05:55:14.912592 master-0 kubenswrapper[34361]: I0224 05:55:14.910523 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 05:55:14.912592 master-0 kubenswrapper[34361]: E0224 05:55:14.911389 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25249dd5-54b4-44dc-ab35-e8532b1d0875" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 05:55:14.912592 master-0 kubenswrapper[34361]: I0224 05:55:14.911408 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="25249dd5-54b4-44dc-ab35-e8532b1d0875" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 05:55:14.912592 master-0 kubenswrapper[34361]: I0224 05:55:14.912090 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="25249dd5-54b4-44dc-ab35-e8532b1d0875" containerName="nova-cell1-novncproxy-novncproxy" Feb 24 05:55:14.913842 master-0 kubenswrapper[34361]: I0224 05:55:14.913097 34361 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:14.916125 master-0 kubenswrapper[34361]: I0224 05:55:14.916091 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 24 05:55:14.916371 master-0 kubenswrapper[34361]: I0224 05:55:14.916290 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 24 05:55:14.916459 master-0 kubenswrapper[34361]: I0224 05:55:14.916435 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 24 05:55:14.924763 master-0 kubenswrapper[34361]: I0224 05:55:14.924635 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 05:55:14.976669 master-0 kubenswrapper[34361]: I0224 05:55:14.976356 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:14.976669 master-0 kubenswrapper[34361]: I0224 05:55:14.976439 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxqn\" (UniqueName: \"kubernetes.io/projected/a03f2b36-4824-4f0f-810d-9be012f74776-kube-api-access-njxqn\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:14.976669 master-0 kubenswrapper[34361]: I0224 05:55:14.976546 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:14.976669 master-0 kubenswrapper[34361]: I0224 05:55:14.976624 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:14.977061 master-0 kubenswrapper[34361]: I0224 05:55:14.976967 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.080346 master-0 kubenswrapper[34361]: I0224 05:55:15.079505 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.080346 master-0 kubenswrapper[34361]: I0224 05:55:15.079637 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.080346 master-0 kubenswrapper[34361]: I0224 05:55:15.079704 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.080346 master-0 kubenswrapper[34361]: I0224 05:55:15.079799 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.080346 master-0 kubenswrapper[34361]: I0224 05:55:15.079827 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njxqn\" (UniqueName: \"kubernetes.io/projected/a03f2b36-4824-4f0f-810d-9be012f74776-kube-api-access-njxqn\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.088350 master-0 kubenswrapper[34361]: I0224 05:55:15.084826 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.088350 master-0 kubenswrapper[34361]: I0224 05:55:15.085369 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.088350 master-0 kubenswrapper[34361]: I0224 05:55:15.085940 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.089681 master-0 kubenswrapper[34361]: I0224 05:55:15.089608 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a03f2b36-4824-4f0f-810d-9be012f74776-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.100815 master-0 kubenswrapper[34361]: I0224 05:55:15.100745 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxqn\" (UniqueName: \"kubernetes.io/projected/a03f2b36-4824-4f0f-810d-9be012f74776-kube-api-access-njxqn\") pod \"nova-cell1-novncproxy-0\" (UID: \"a03f2b36-4824-4f0f-810d-9be012f74776\") " pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.264204 master-0 kubenswrapper[34361]: I0224 05:55:15.264110 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:15.829028 master-0 kubenswrapper[34361]: I0224 05:55:15.828943 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 24 05:55:16.498780 master-0 kubenswrapper[34361]: I0224 05:55:16.498605 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a03f2b36-4824-4f0f-810d-9be012f74776","Type":"ContainerStarted","Data":"b8f489936aa5a654e78237ba51046fdb9e7eace3d970d9dfc52587e83939ed96"} Feb 24 05:55:16.498780 master-0 kubenswrapper[34361]: I0224 05:55:16.498685 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a03f2b36-4824-4f0f-810d-9be012f74776","Type":"ContainerStarted","Data":"2e8cb8b44346bd422646a34a22d7c271ccc2c60c1d975703a870e22e1a783ae4"} Feb 24 05:55:16.533839 master-0 kubenswrapper[34361]: I0224 05:55:16.532613 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.53258341 podStartE2EDuration="2.53258341s" podCreationTimestamp="2026-02-24 05:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:16.528253674 +0000 UTC m=+1076.230870720" watchObservedRunningTime="2026-02-24 05:55:16.53258341 +0000 UTC m=+1076.235200456" Feb 24 05:55:16.547379 master-0 kubenswrapper[34361]: I0224 05:55:16.547270 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 05:55:16.551102 master-0 kubenswrapper[34361]: I0224 05:55:16.550809 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 05:55:16.559822 master-0 kubenswrapper[34361]: I0224 05:55:16.559642 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Feb 24 05:55:16.624996 master-0 kubenswrapper[34361]: I0224 05:55:16.624905 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25249dd5-54b4-44dc-ab35-e8532b1d0875" path="/var/lib/kubelet/pods/25249dd5-54b4-44dc-ab35-e8532b1d0875/volumes" Feb 24 05:55:17.527371 master-0 kubenswrapper[34361]: I0224 05:55:17.527279 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 05:55:19.590193 master-0 kubenswrapper[34361]: I0224 05:55:19.590081 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 05:55:19.590898 master-0 kubenswrapper[34361]: I0224 05:55:19.590794 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 05:55:19.591397 master-0 kubenswrapper[34361]: I0224 05:55:19.591242 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 05:55:19.597437 master-0 kubenswrapper[34361]: I0224 05:55:19.597253 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 05:55:20.265862 master-0 kubenswrapper[34361]: I0224 05:55:20.265796 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:20.589508 master-0 kubenswrapper[34361]: I0224 05:55:20.589315 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 05:55:20.594442 master-0 kubenswrapper[34361]: I0224 05:55:20.594043 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 05:55:20.848620 master-0 kubenswrapper[34361]: I0224 05:55:20.840430 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-555687858c-l6w59"] Feb 24 05:55:20.848620 master-0 kubenswrapper[34361]: I0224 05:55:20.846029 34361 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:20.986629 master-0 kubenswrapper[34361]: I0224 05:55:20.983607 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-dns-swift-storage-0\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:20.986629 master-0 kubenswrapper[34361]: I0224 05:55:20.983785 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-dns-svc\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:20.986629 master-0 kubenswrapper[34361]: I0224 05:55:20.983916 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-config\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:20.986629 master-0 kubenswrapper[34361]: I0224 05:55:20.984073 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-ovsdbserver-nb\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:20.986629 master-0 kubenswrapper[34361]: I0224 05:55:20.984202 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wff4\" (UniqueName: 
\"kubernetes.io/projected/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-kube-api-access-4wff4\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:20.986629 master-0 kubenswrapper[34361]: I0224 05:55:20.984285 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-ovsdbserver-sb\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.007843 master-0 kubenswrapper[34361]: I0224 05:55:21.007723 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-555687858c-l6w59"] Feb 24 05:55:21.116542 master-0 kubenswrapper[34361]: I0224 05:55:21.116374 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-dns-swift-storage-0\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.116865 master-0 kubenswrapper[34361]: I0224 05:55:21.116849 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-dns-svc\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.116990 master-0 kubenswrapper[34361]: I0224 05:55:21.116977 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-config\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " 
pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.117107 master-0 kubenswrapper[34361]: I0224 05:55:21.117094 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-ovsdbserver-nb\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.117215 master-0 kubenswrapper[34361]: I0224 05:55:21.117202 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wff4\" (UniqueName: \"kubernetes.io/projected/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-kube-api-access-4wff4\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.117324 master-0 kubenswrapper[34361]: I0224 05:55:21.117293 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-ovsdbserver-sb\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.118399 master-0 kubenswrapper[34361]: I0224 05:55:21.118385 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-ovsdbserver-sb\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.119059 master-0 kubenswrapper[34361]: I0224 05:55:21.119044 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-dns-swift-storage-0\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: 
\"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.119722 master-0 kubenswrapper[34361]: I0224 05:55:21.119705 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-dns-svc\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.120360 master-0 kubenswrapper[34361]: I0224 05:55:21.120346 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-config\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.122361 master-0 kubenswrapper[34361]: I0224 05:55:21.121187 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-ovsdbserver-nb\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.147588 master-0 kubenswrapper[34361]: I0224 05:55:21.147496 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wff4\" (UniqueName: \"kubernetes.io/projected/05be70e5-57bf-4d8b-bc61-18cf27ef2b40-kube-api-access-4wff4\") pod \"dnsmasq-dns-555687858c-l6w59\" (UID: \"05be70e5-57bf-4d8b-bc61-18cf27ef2b40\") " pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.267959 master-0 kubenswrapper[34361]: I0224 05:55:21.267814 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:21.882982 master-0 kubenswrapper[34361]: I0224 05:55:21.882843 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-555687858c-l6w59"] Feb 24 05:55:22.668886 master-0 kubenswrapper[34361]: I0224 05:55:22.667459 34361 generic.go:334] "Generic (PLEG): container finished" podID="05be70e5-57bf-4d8b-bc61-18cf27ef2b40" containerID="3d20c479acab853ba8360296a66e5315dc5b21e5513acf700e43e8884fde3f7b" exitCode=0 Feb 24 05:55:22.668886 master-0 kubenswrapper[34361]: I0224 05:55:22.667698 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-555687858c-l6w59" event={"ID":"05be70e5-57bf-4d8b-bc61-18cf27ef2b40","Type":"ContainerDied","Data":"3d20c479acab853ba8360296a66e5315dc5b21e5513acf700e43e8884fde3f7b"} Feb 24 05:55:22.668886 master-0 kubenswrapper[34361]: I0224 05:55:22.667832 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-555687858c-l6w59" event={"ID":"05be70e5-57bf-4d8b-bc61-18cf27ef2b40","Type":"ContainerStarted","Data":"14b47539547db6d0b36d1120f9650fb7e3bb3731aa1a3102138f263be0494a11"} Feb 24 05:55:23.692856 master-0 kubenswrapper[34361]: I0224 05:55:23.692768 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-555687858c-l6w59" event={"ID":"05be70e5-57bf-4d8b-bc61-18cf27ef2b40","Type":"ContainerStarted","Data":"df2b5405a5cff5c96c3e7ea87e034b4b059bb46e44f7a2c18b4a0a8c25229962"} Feb 24 05:55:23.693728 master-0 kubenswrapper[34361]: I0224 05:55:23.693262 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-555687858c-l6w59" Feb 24 05:55:23.739533 master-0 kubenswrapper[34361]: I0224 05:55:23.739381 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-555687858c-l6w59" podStartSLOduration=3.739346202 podStartE2EDuration="3.739346202s" podCreationTimestamp="2026-02-24 05:55:20 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:23.717605925 +0000 UTC m=+1083.420223011" watchObservedRunningTime="2026-02-24 05:55:23.739346202 +0000 UTC m=+1083.441963288" Feb 24 05:55:24.075907 master-0 kubenswrapper[34361]: I0224 05:55:24.075657 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:55:24.076360 master-0 kubenswrapper[34361]: I0224 05:55:24.076072 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-log" containerID="cri-o://a6bb288e8b19f3d5a9ba17f1c1e199f60015d808b08b91cdf75f3da907a5a88b" gracePeriod=30 Feb 24 05:55:24.076360 master-0 kubenswrapper[34361]: I0224 05:55:24.076200 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-api" containerID="cri-o://45519f82cb58fe639143471d2ff7b23337594f893e8e328ded52c40f36c082fb" gracePeriod=30 Feb 24 05:55:24.719655 master-0 kubenswrapper[34361]: I0224 05:55:24.719565 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90","Type":"ContainerDied","Data":"a6bb288e8b19f3d5a9ba17f1c1e199f60015d808b08b91cdf75f3da907a5a88b"} Feb 24 05:55:24.719655 master-0 kubenswrapper[34361]: I0224 05:55:24.719514 34361 generic.go:334] "Generic (PLEG): container finished" podID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerID="a6bb288e8b19f3d5a9ba17f1c1e199f60015d808b08b91cdf75f3da907a5a88b" exitCode=143 Feb 24 05:55:25.265889 master-0 kubenswrapper[34361]: I0224 05:55:25.265809 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 24 05:55:25.296995 master-0 kubenswrapper[34361]: I0224 
05:55:25.296915 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Feb 24 05:55:25.751138 master-0 kubenswrapper[34361]: I0224 05:55:25.751049 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 24 05:55:26.063765 master-0 kubenswrapper[34361]: I0224 05:55:26.063551 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-l5vzg"]
Feb 24 05:55:26.066161 master-0 kubenswrapper[34361]: I0224 05:55:26.066107 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.090805 master-0 kubenswrapper[34361]: I0224 05:55:26.090647 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-4lbdf"]
Feb 24 05:55:26.093287 master-0 kubenswrapper[34361]: I0224 05:55:26.093247 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.098210 master-0 kubenswrapper[34361]: I0224 05:55:26.098137 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 24 05:55:26.098697 master-0 kubenswrapper[34361]: I0224 05:55:26.098648 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 24 05:55:26.112152 master-0 kubenswrapper[34361]: I0224 05:55:26.112057 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l5vzg"]
Feb 24 05:55:26.133338 master-0 kubenswrapper[34361]: I0224 05:55:26.133244 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-4lbdf"]
Feb 24 05:55:26.196351 master-0 kubenswrapper[34361]: I0224 05:55:26.196264 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-config-data\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.196731 master-0 kubenswrapper[34361]: I0224 05:55:26.196713 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-scripts\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.196896 master-0 kubenswrapper[34361]: I0224 05:55:26.196876 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-426j4\" (UniqueName: \"kubernetes.io/projected/0215380e-69c6-41f6-a231-98e9714a160d-kube-api-access-426j4\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.197029 master-0 kubenswrapper[34361]: I0224 05:55:26.197014 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-scripts\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.197128 master-0 kubenswrapper[34361]: I0224 05:55:26.197114 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnlks\" (UniqueName: \"kubernetes.io/projected/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-kube-api-access-fnlks\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.197233 master-0 kubenswrapper[34361]: I0224 05:55:26.197215 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.197412 master-0 kubenswrapper[34361]: I0224 05:55:26.197366 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-config-data\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.197533 master-0 kubenswrapper[34361]: I0224 05:55:26.197519 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-combined-ca-bundle\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.300595 master-0 kubenswrapper[34361]: I0224 05:55:26.300471 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-scripts\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.301001 master-0 kubenswrapper[34361]: I0224 05:55:26.300684 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-426j4\" (UniqueName: \"kubernetes.io/projected/0215380e-69c6-41f6-a231-98e9714a160d-kube-api-access-426j4\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.301001 master-0 kubenswrapper[34361]: I0224 05:55:26.300866 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-scripts\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.301001 master-0 kubenswrapper[34361]: I0224 05:55:26.300932 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnlks\" (UniqueName: \"kubernetes.io/projected/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-kube-api-access-fnlks\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.301464 master-0 kubenswrapper[34361]: I0224 05:55:26.301017 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.301464 master-0 kubenswrapper[34361]: I0224 05:55:26.301108 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-config-data\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.301464 master-0 kubenswrapper[34361]: I0224 05:55:26.301183 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-combined-ca-bundle\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.301464 master-0 kubenswrapper[34361]: I0224 05:55:26.301239 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-config-data\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.306933 master-0 kubenswrapper[34361]: I0224 05:55:26.306887 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.309604 master-0 kubenswrapper[34361]: I0224 05:55:26.309510 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-config-data\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.309878 master-0 kubenswrapper[34361]: I0224 05:55:26.309814 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-config-data\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.311395 master-0 kubenswrapper[34361]: I0224 05:55:26.311272 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-scripts\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.319728 master-0 kubenswrapper[34361]: I0224 05:55:26.314168 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-scripts\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.319728 master-0 kubenswrapper[34361]: I0224 05:55:26.317228 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-combined-ca-bundle\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.328490 master-0 kubenswrapper[34361]: I0224 05:55:26.322173 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-426j4\" (UniqueName: \"kubernetes.io/projected/0215380e-69c6-41f6-a231-98e9714a160d-kube-api-access-426j4\") pod \"nova-cell1-cell-mapping-l5vzg\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.328490 master-0 kubenswrapper[34361]: I0224 05:55:26.322906 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnlks\" (UniqueName: \"kubernetes.io/projected/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-kube-api-access-fnlks\") pod \"nova-cell1-host-discover-4lbdf\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:26.447602 master-0 kubenswrapper[34361]: I0224 05:55:26.447422 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l5vzg"
Feb 24 05:55:26.457438 master-0 kubenswrapper[34361]: I0224 05:55:26.457383 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-4lbdf"
Feb 24 05:55:27.035403 master-0 kubenswrapper[34361]: I0224 05:55:27.028095 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-4lbdf"]
Feb 24 05:55:27.076564 master-0 kubenswrapper[34361]: I0224 05:55:27.075297 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-l5vzg"]
Feb 24 05:55:27.776369 master-0 kubenswrapper[34361]: I0224 05:55:27.771325 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l5vzg" event={"ID":"0215380e-69c6-41f6-a231-98e9714a160d","Type":"ContainerStarted","Data":"88c32843b8ac98f536171b903f3090837a291cd61314786b4abb1451784d161b"}
Feb 24 05:55:27.776369 master-0 kubenswrapper[34361]: I0224 05:55:27.771400 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l5vzg" event={"ID":"0215380e-69c6-41f6-a231-98e9714a160d","Type":"ContainerStarted","Data":"001295e46c81403233da552a6bd26f3d1c5989ef1c45addc83d0ef5248614fdd"}
Feb 24 05:55:27.776369 master-0 kubenswrapper[34361]: I0224 05:55:27.774556 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-4lbdf" event={"ID":"598b59b9-eeed-4a94-a3b0-fb6c19d76c53","Type":"ContainerStarted","Data":"cf4854259d77311f1ec27f712fbe10530ae0ecfdff1f0b17fcf2a99fb10cb7bb"}
Feb 24 05:55:27.776369 master-0 kubenswrapper[34361]: I0224 05:55:27.774581 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-4lbdf" event={"ID":"598b59b9-eeed-4a94-a3b0-fb6c19d76c53","Type":"ContainerStarted","Data":"36c3f0ca48b2b9f0321218ca247f970bd6a224dff44f5b06188f9b85500d70ef"}
Feb 24 05:55:27.778087 master-0 kubenswrapper[34361]: I0224 05:55:27.778025 34361 generic.go:334] "Generic (PLEG): container finished" podID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerID="45519f82cb58fe639143471d2ff7b23337594f893e8e328ded52c40f36c082fb" exitCode=0
Feb 24 05:55:27.778149 master-0 kubenswrapper[34361]: I0224 05:55:27.778092 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90","Type":"ContainerDied","Data":"45519f82cb58fe639143471d2ff7b23337594f893e8e328ded52c40f36c082fb"}
Feb 24 05:55:27.778149 master-0 kubenswrapper[34361]: I0224 05:55:27.778121 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90","Type":"ContainerDied","Data":"5ce2992ec65d69a669593b6dbc2fac418731493a431bfcd324af7f4dec4d669c"}
Feb 24 05:55:27.778149 master-0 kubenswrapper[34361]: I0224 05:55:27.778132 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ce2992ec65d69a669593b6dbc2fac418731493a431bfcd324af7f4dec4d669c"
Feb 24 05:55:27.796487 master-0 kubenswrapper[34361]: I0224 05:55:27.795975 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-l5vzg" podStartSLOduration=1.795955721 podStartE2EDuration="1.795955721s" podCreationTimestamp="2026-02-24 05:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:27.792877028 +0000 UTC m=+1087.495494074" watchObservedRunningTime="2026-02-24 05:55:27.795955721 +0000 UTC m=+1087.498572757"
Feb 24 05:55:27.799219 master-0 kubenswrapper[34361]: I0224 05:55:27.799164 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 05:55:27.830457 master-0 kubenswrapper[34361]: I0224 05:55:27.824759 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-4lbdf" podStartSLOduration=1.824733057 podStartE2EDuration="1.824733057s" podCreationTimestamp="2026-02-24 05:55:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:27.820706079 +0000 UTC m=+1087.523323125" watchObservedRunningTime="2026-02-24 05:55:27.824733057 +0000 UTC m=+1087.527350103"
Feb 24 05:55:27.978178 master-0 kubenswrapper[34361]: I0224 05:55:27.978035 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-config-data\") pod \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") "
Feb 24 05:55:27.978178 master-0 kubenswrapper[34361]: I0224 05:55:27.978172 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-combined-ca-bundle\") pod \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") "
Feb 24 05:55:27.978452 master-0 kubenswrapper[34361]: I0224 05:55:27.978207 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfr24\" (UniqueName: \"kubernetes.io/projected/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-kube-api-access-vfr24\") pod \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") "
Feb 24 05:55:27.978452 master-0 kubenswrapper[34361]: I0224 05:55:27.978382 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-logs\") pod \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\" (UID: \"ab08d4ad-e3ec-4e99-96ce-f6242fe48f90\") "
Feb 24 05:55:27.979551 master-0 kubenswrapper[34361]: I0224 05:55:27.979511 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-logs" (OuterVolumeSpecName: "logs") pod "ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" (UID: "ab08d4ad-e3ec-4e99-96ce-f6242fe48f90"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 05:55:27.989001 master-0 kubenswrapper[34361]: I0224 05:55:27.988832 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-kube-api-access-vfr24" (OuterVolumeSpecName: "kube-api-access-vfr24") pod "ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" (UID: "ab08d4ad-e3ec-4e99-96ce-f6242fe48f90"). InnerVolumeSpecName "kube-api-access-vfr24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 05:55:28.034363 master-0 kubenswrapper[34361]: I0224 05:55:28.034174 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" (UID: "ab08d4ad-e3ec-4e99-96ce-f6242fe48f90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:55:28.042620 master-0 kubenswrapper[34361]: I0224 05:55:28.042515 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-config-data" (OuterVolumeSpecName: "config-data") pod "ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" (UID: "ab08d4ad-e3ec-4e99-96ce-f6242fe48f90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 05:55:28.081857 master-0 kubenswrapper[34361]: I0224 05:55:28.081784 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-logs\") on node \"master-0\" DevicePath \"\""
Feb 24 05:55:28.081857 master-0 kubenswrapper[34361]: I0224 05:55:28.081844 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-config-data\") on node \"master-0\" DevicePath \"\""
Feb 24 05:55:28.082173 master-0 kubenswrapper[34361]: I0224 05:55:28.081922 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 24 05:55:28.082173 master-0 kubenswrapper[34361]: I0224 05:55:28.081940 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfr24\" (UniqueName: \"kubernetes.io/projected/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90-kube-api-access-vfr24\") on node \"master-0\" DevicePath \"\""
Feb 24 05:55:28.792467 master-0 kubenswrapper[34361]: I0224 05:55:28.792415 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 05:55:28.844336 master-0 kubenswrapper[34361]: I0224 05:55:28.844191 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 24 05:55:28.862803 master-0 kubenswrapper[34361]: I0224 05:55:28.858967 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 24 05:55:28.885333 master-0 kubenswrapper[34361]: I0224 05:55:28.884262 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 24 05:55:28.885333 master-0 kubenswrapper[34361]: E0224 05:55:28.884964 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-api"
Feb 24 05:55:28.885333 master-0 kubenswrapper[34361]: I0224 05:55:28.884983 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-api"
Feb 24 05:55:28.885333 master-0 kubenswrapper[34361]: E0224 05:55:28.885005 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-log"
Feb 24 05:55:28.885333 master-0 kubenswrapper[34361]: I0224 05:55:28.885012 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-log"
Feb 24 05:55:28.885916 master-0 kubenswrapper[34361]: I0224 05:55:28.885562 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-api"
Feb 24 05:55:28.885916 master-0 kubenswrapper[34361]: I0224 05:55:28.885618 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" containerName="nova-api-log"
Feb 24 05:55:28.899341 master-0 kubenswrapper[34361]: I0224 05:55:28.887097 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 05:55:28.899341 master-0 kubenswrapper[34361]: I0224 05:55:28.894180 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 24 05:55:28.899341 master-0 kubenswrapper[34361]: I0224 05:55:28.894693 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 24 05:55:28.899802 master-0 kubenswrapper[34361]: I0224 05:55:28.899743 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 24 05:55:28.914247 master-0 kubenswrapper[34361]: I0224 05:55:28.912795 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 24 05:55:29.021063 master-0 kubenswrapper[34361]: I0224 05:55:29.020934 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732af8b5-18bd-4054-bf88-cd073fe009a6-logs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.021404 master-0 kubenswrapper[34361]: I0224 05:55:29.021187 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9krjb\" (UniqueName: \"kubernetes.io/projected/732af8b5-18bd-4054-bf88-cd073fe009a6-kube-api-access-9krjb\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.021404 master-0 kubenswrapper[34361]: I0224 05:55:29.021343 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.022458 master-0 kubenswrapper[34361]: I0224 05:55:29.022421 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.022677 master-0 kubenswrapper[34361]: I0224 05:55:29.022622 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-config-data\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.022945 master-0 kubenswrapper[34361]: I0224 05:55:29.022913 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.125015 master-0 kubenswrapper[34361]: I0224 05:55:29.124948 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-config-data\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.125680 master-0 kubenswrapper[34361]: I0224 05:55:29.125049 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.125680 master-0 kubenswrapper[34361]: I0224 05:55:29.125091 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732af8b5-18bd-4054-bf88-cd073fe009a6-logs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.125680 master-0 kubenswrapper[34361]: I0224 05:55:29.125137 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9krjb\" (UniqueName: \"kubernetes.io/projected/732af8b5-18bd-4054-bf88-cd073fe009a6-kube-api-access-9krjb\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.125680 master-0 kubenswrapper[34361]: I0224 05:55:29.125186 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.125680 master-0 kubenswrapper[34361]: I0224 05:55:29.125257 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.126261 master-0 kubenswrapper[34361]: I0224 05:55:29.126221 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732af8b5-18bd-4054-bf88-cd073fe009a6-logs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.131611 master-0 kubenswrapper[34361]: I0224 05:55:29.131575 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-config-data\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.131866 master-0 kubenswrapper[34361]: I0224 05:55:29.131835 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.132362 master-0 kubenswrapper[34361]: I0224 05:55:29.132342 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-public-tls-certs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.133822 master-0 kubenswrapper[34361]: I0224 05:55:29.133747 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.149879 master-0 kubenswrapper[34361]: I0224 05:55:29.149809 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9krjb\" (UniqueName: \"kubernetes.io/projected/732af8b5-18bd-4054-bf88-cd073fe009a6-kube-api-access-9krjb\") pod \"nova-api-0\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " pod="openstack/nova-api-0"
Feb 24 05:55:29.236084 master-0 kubenswrapper[34361]: I0224 05:55:29.236003 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 24 05:55:29.809813 master-0 kubenswrapper[34361]: W0224 05:55:29.809744 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod732af8b5_18bd_4054_bf88_cd073fe009a6.slice/crio-c78f8c88b06bcf0bcfe70d732737b0a481d09d97ea43be4bdd2f06e366765783 WatchSource:0}: Error finding container c78f8c88b06bcf0bcfe70d732737b0a481d09d97ea43be4bdd2f06e366765783: Status 404 returned error can't find the container with id c78f8c88b06bcf0bcfe70d732737b0a481d09d97ea43be4bdd2f06e366765783
Feb 24 05:55:29.811428 master-0 kubenswrapper[34361]: I0224 05:55:29.811397 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 24 05:55:30.614297 master-0 kubenswrapper[34361]: I0224 05:55:30.614226 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab08d4ad-e3ec-4e99-96ce-f6242fe48f90" path="/var/lib/kubelet/pods/ab08d4ad-e3ec-4e99-96ce-f6242fe48f90/volumes"
Feb 24 05:55:30.861142 master-0 kubenswrapper[34361]: I0224 05:55:30.861067 34361 generic.go:334] "Generic (PLEG): container finished" podID="598b59b9-eeed-4a94-a3b0-fb6c19d76c53" containerID="cf4854259d77311f1ec27f712fbe10530ae0ecfdff1f0b17fcf2a99fb10cb7bb" exitCode=0
Feb 24 05:55:30.861663 master-0 kubenswrapper[34361]: I0224 05:55:30.861177 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-4lbdf" event={"ID":"598b59b9-eeed-4a94-a3b0-fb6c19d76c53","Type":"ContainerDied","Data":"cf4854259d77311f1ec27f712fbe10530ae0ecfdff1f0b17fcf2a99fb10cb7bb"}
Feb 24 05:55:30.865052 master-0 kubenswrapper[34361]: I0224 05:55:30.864991 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"732af8b5-18bd-4054-bf88-cd073fe009a6","Type":"ContainerStarted","Data":"6d99395451a38492334030ad1f3a8ef049f606db06fee046630209a29f2c8895"}
Feb 24 05:55:30.865052 master-0 kubenswrapper[34361]: I0224 05:55:30.865032 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"732af8b5-18bd-4054-bf88-cd073fe009a6","Type":"ContainerStarted","Data":"7f97d9de5975a4d2d1d19fcf71d72274f703fdac1aa293895261156a3558beb8"}
Feb 24 05:55:30.865052 master-0 kubenswrapper[34361]: I0224 05:55:30.865051 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"732af8b5-18bd-4054-bf88-cd073fe009a6","Type":"ContainerStarted","Data":"c78f8c88b06bcf0bcfe70d732737b0a481d09d97ea43be4bdd2f06e366765783"}
Feb 24 05:55:30.940045 master-0 kubenswrapper[34361]: I0224 05:55:30.939472 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.939444583 podStartE2EDuration="2.939444583s" podCreationTimestamp="2026-02-24 05:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:30.9240953 +0000 UTC m=+1090.626712346" watchObservedRunningTime="2026-02-24 05:55:30.939444583 +0000 UTC m=+1090.642061629"
Feb 24 05:55:31.270614 master-0 kubenswrapper[34361]: I0224 05:55:31.270522 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-555687858c-l6w59"
Feb 24 05:55:31.387743 master-0 kubenswrapper[34361]: I0224 05:55:31.387680 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf8f9d6f-578q8"]
Feb 24 05:55:31.389201 master-0 kubenswrapper[34361]: I0224 05:55:31.389169 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" podUID="2d229fa5-0153-43d2-92d6-e548ed604b0b" containerName="dnsmasq-dns" containerID="cri-o://65d2b8dd751716ea675453d0f0ff5d427a09809a3ad40f1add62946e5d0a5571" gracePeriod=10
Feb 24 05:55:31.906818 master-0 kubenswrapper[34361]: I0224 05:55:31.906746 34361 generic.go:334] "Generic (PLEG): container finished" podID="2d229fa5-0153-43d2-92d6-e548ed604b0b" containerID="65d2b8dd751716ea675453d0f0ff5d427a09809a3ad40f1add62946e5d0a5571" exitCode=0
Feb 24 05:55:31.908332 master-0 kubenswrapper[34361]: I0224 05:55:31.908288 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" event={"ID":"2d229fa5-0153-43d2-92d6-e548ed604b0b","Type":"ContainerDied","Data":"65d2b8dd751716ea675453d0f0ff5d427a09809a3ad40f1add62946e5d0a5571"}
Feb 24 05:55:31.908395 master-0 kubenswrapper[34361]: I0224 05:55:31.908344 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" event={"ID":"2d229fa5-0153-43d2-92d6-e548ed604b0b","Type":"ContainerDied","Data":"96293aa1aeb850959c682803c0bbd53c471e71c8830a61709e8562e15eb31920"}
Feb 24 05:55:31.908395 master-0 kubenswrapper[34361]: I0224 05:55:31.908358 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96293aa1aeb850959c682803c0bbd53c471e71c8830a61709e8562e15eb31920"
Feb 24 05:55:31.969372 master-0 kubenswrapper[34361]: I0224 05:55:31.967264 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8"
Feb 24 05:55:32.045062 master-0 kubenswrapper[34361]: I0224 05:55:32.042244 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-swift-storage-0\") pod \"2d229fa5-0153-43d2-92d6-e548ed604b0b\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") "
Feb 24 05:55:32.045062 master-0 kubenswrapper[34361]: I0224 05:55:32.042636 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-config\") pod \"2d229fa5-0153-43d2-92d6-e548ed604b0b\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") "
Feb 24 05:55:32.050268 master-0 kubenswrapper[34361]: I0224 05:55:32.049523 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7cq4\" (UniqueName: \"kubernetes.io/projected/2d229fa5-0153-43d2-92d6-e548ed604b0b-kube-api-access-g7cq4\") pod \"2d229fa5-0153-43d2-92d6-e548ed604b0b\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") "
Feb 24 05:55:32.050268 master-0 kubenswrapper[34361]: I0224 05:55:32.049608 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-nb\") pod \"2d229fa5-0153-43d2-92d6-e548ed604b0b\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") "
Feb 24 05:55:32.050268 master-0 kubenswrapper[34361]: I0224 05:55:32.049900 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-sb\") pod \"2d229fa5-0153-43d2-92d6-e548ed604b0b\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") "
Feb 24 05:55:32.050268 master-0 kubenswrapper[34361]: I0224 05:55:32.049933 34361 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-svc\") pod \"2d229fa5-0153-43d2-92d6-e548ed604b0b\" (UID: \"2d229fa5-0153-43d2-92d6-e548ed604b0b\") " Feb 24 05:55:32.065593 master-0 kubenswrapper[34361]: I0224 05:55:32.063295 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d229fa5-0153-43d2-92d6-e548ed604b0b-kube-api-access-g7cq4" (OuterVolumeSpecName: "kube-api-access-g7cq4") pod "2d229fa5-0153-43d2-92d6-e548ed604b0b" (UID: "2d229fa5-0153-43d2-92d6-e548ed604b0b"). InnerVolumeSpecName "kube-api-access-g7cq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:55:32.127279 master-0 kubenswrapper[34361]: I0224 05:55:32.127203 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-config" (OuterVolumeSpecName: "config") pod "2d229fa5-0153-43d2-92d6-e548ed604b0b" (UID: "2d229fa5-0153-43d2-92d6-e548ed604b0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:55:32.130253 master-0 kubenswrapper[34361]: I0224 05:55:32.130187 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d229fa5-0153-43d2-92d6-e548ed604b0b" (UID: "2d229fa5-0153-43d2-92d6-e548ed604b0b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:55:32.145055 master-0 kubenswrapper[34361]: I0224 05:55:32.144978 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d229fa5-0153-43d2-92d6-e548ed604b0b" (UID: "2d229fa5-0153-43d2-92d6-e548ed604b0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:55:32.161736 master-0 kubenswrapper[34361]: I0224 05:55:32.161611 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.161736 master-0 kubenswrapper[34361]: I0224 05:55:32.161677 34361 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.161736 master-0 kubenswrapper[34361]: I0224 05:55:32.161696 34361 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.161736 master-0 kubenswrapper[34361]: I0224 05:55:32.161711 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7cq4\" (UniqueName: \"kubernetes.io/projected/2d229fa5-0153-43d2-92d6-e548ed604b0b-kube-api-access-g7cq4\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.162614 master-0 kubenswrapper[34361]: I0224 05:55:32.162541 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d229fa5-0153-43d2-92d6-e548ed604b0b" 
(UID: "2d229fa5-0153-43d2-92d6-e548ed604b0b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:55:32.226058 master-0 kubenswrapper[34361]: I0224 05:55:32.225981 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d229fa5-0153-43d2-92d6-e548ed604b0b" (UID: "2d229fa5-0153-43d2-92d6-e548ed604b0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:55:32.264403 master-0 kubenswrapper[34361]: I0224 05:55:32.264328 34361 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.264403 master-0 kubenswrapper[34361]: I0224 05:55:32.264392 34361 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d229fa5-0153-43d2-92d6-e548ed604b0b-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.390179 master-0 kubenswrapper[34361]: I0224 05:55:32.390107 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-4lbdf" Feb 24 05:55:32.468277 master-0 kubenswrapper[34361]: I0224 05:55:32.468105 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnlks\" (UniqueName: \"kubernetes.io/projected/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-kube-api-access-fnlks\") pod \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " Feb 24 05:55:32.469087 master-0 kubenswrapper[34361]: I0224 05:55:32.468282 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-config-data\") pod \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " Feb 24 05:55:32.469087 master-0 kubenswrapper[34361]: I0224 05:55:32.468452 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-scripts\") pod \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " Feb 24 05:55:32.469087 master-0 kubenswrapper[34361]: I0224 05:55:32.468748 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-combined-ca-bundle\") pod \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\" (UID: \"598b59b9-eeed-4a94-a3b0-fb6c19d76c53\") " Feb 24 05:55:32.475477 master-0 kubenswrapper[34361]: I0224 05:55:32.472659 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-scripts" (OuterVolumeSpecName: "scripts") pod "598b59b9-eeed-4a94-a3b0-fb6c19d76c53" (UID: "598b59b9-eeed-4a94-a3b0-fb6c19d76c53"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:32.475477 master-0 kubenswrapper[34361]: I0224 05:55:32.473544 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-kube-api-access-fnlks" (OuterVolumeSpecName: "kube-api-access-fnlks") pod "598b59b9-eeed-4a94-a3b0-fb6c19d76c53" (UID: "598b59b9-eeed-4a94-a3b0-fb6c19d76c53"). InnerVolumeSpecName "kube-api-access-fnlks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:55:32.506765 master-0 kubenswrapper[34361]: I0224 05:55:32.506670 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "598b59b9-eeed-4a94-a3b0-fb6c19d76c53" (UID: "598b59b9-eeed-4a94-a3b0-fb6c19d76c53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:32.522234 master-0 kubenswrapper[34361]: I0224 05:55:32.522154 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-config-data" (OuterVolumeSpecName: "config-data") pod "598b59b9-eeed-4a94-a3b0-fb6c19d76c53" (UID: "598b59b9-eeed-4a94-a3b0-fb6c19d76c53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:32.571388 master-0 kubenswrapper[34361]: I0224 05:55:32.571280 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnlks\" (UniqueName: \"kubernetes.io/projected/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-kube-api-access-fnlks\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.571388 master-0 kubenswrapper[34361]: I0224 05:55:32.571368 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.571388 master-0 kubenswrapper[34361]: I0224 05:55:32.571391 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.571826 master-0 kubenswrapper[34361]: I0224 05:55:32.571410 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/598b59b9-eeed-4a94-a3b0-fb6c19d76c53-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:32.933645 master-0 kubenswrapper[34361]: I0224 05:55:32.933417 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-4lbdf" event={"ID":"598b59b9-eeed-4a94-a3b0-fb6c19d76c53","Type":"ContainerDied","Data":"36c3f0ca48b2b9f0321218ca247f970bd6a224dff44f5b06188f9b85500d70ef"} Feb 24 05:55:32.933645 master-0 kubenswrapper[34361]: I0224 05:55:32.933489 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36c3f0ca48b2b9f0321218ca247f970bd6a224dff44f5b06188f9b85500d70ef" Feb 24 05:55:32.933645 master-0 kubenswrapper[34361]: I0224 05:55:32.933569 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-4lbdf" Feb 24 05:55:32.942527 master-0 kubenswrapper[34361]: I0224 05:55:32.942449 34361 generic.go:334] "Generic (PLEG): container finished" podID="0215380e-69c6-41f6-a231-98e9714a160d" containerID="88c32843b8ac98f536171b903f3090837a291cd61314786b4abb1451784d161b" exitCode=0 Feb 24 05:55:32.942875 master-0 kubenswrapper[34361]: I0224 05:55:32.942568 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l5vzg" event={"ID":"0215380e-69c6-41f6-a231-98e9714a160d","Type":"ContainerDied","Data":"88c32843b8ac98f536171b903f3090837a291cd61314786b4abb1451784d161b"} Feb 24 05:55:32.942875 master-0 kubenswrapper[34361]: I0224 05:55:32.942612 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fcf8f9d6f-578q8" Feb 24 05:55:33.006827 master-0 kubenswrapper[34361]: I0224 05:55:33.006737 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fcf8f9d6f-578q8"] Feb 24 05:55:33.023631 master-0 kubenswrapper[34361]: I0224 05:55:33.023576 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fcf8f9d6f-578q8"] Feb 24 05:55:34.504523 master-0 kubenswrapper[34361]: I0224 05:55:34.504464 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l5vzg" Feb 24 05:55:34.619717 master-0 kubenswrapper[34361]: I0224 05:55:34.619345 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d229fa5-0153-43d2-92d6-e548ed604b0b" path="/var/lib/kubelet/pods/2d229fa5-0153-43d2-92d6-e548ed604b0b/volumes" Feb 24 05:55:34.635572 master-0 kubenswrapper[34361]: I0224 05:55:34.635493 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-scripts\") pod \"0215380e-69c6-41f6-a231-98e9714a160d\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " Feb 24 05:55:34.635770 master-0 kubenswrapper[34361]: I0224 05:55:34.635726 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-426j4\" (UniqueName: \"kubernetes.io/projected/0215380e-69c6-41f6-a231-98e9714a160d-kube-api-access-426j4\") pod \"0215380e-69c6-41f6-a231-98e9714a160d\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " Feb 24 05:55:34.635873 master-0 kubenswrapper[34361]: I0224 05:55:34.635793 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-config-data\") pod \"0215380e-69c6-41f6-a231-98e9714a160d\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " Feb 24 05:55:34.636184 master-0 kubenswrapper[34361]: I0224 05:55:34.636145 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-combined-ca-bundle\") pod \"0215380e-69c6-41f6-a231-98e9714a160d\" (UID: \"0215380e-69c6-41f6-a231-98e9714a160d\") " Feb 24 05:55:34.640080 master-0 kubenswrapper[34361]: I0224 05:55:34.639986 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0215380e-69c6-41f6-a231-98e9714a160d-kube-api-access-426j4" (OuterVolumeSpecName: "kube-api-access-426j4") pod "0215380e-69c6-41f6-a231-98e9714a160d" (UID: "0215380e-69c6-41f6-a231-98e9714a160d"). InnerVolumeSpecName "kube-api-access-426j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:55:34.641189 master-0 kubenswrapper[34361]: I0224 05:55:34.641129 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-scripts" (OuterVolumeSpecName: "scripts") pod "0215380e-69c6-41f6-a231-98e9714a160d" (UID: "0215380e-69c6-41f6-a231-98e9714a160d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:34.675043 master-0 kubenswrapper[34361]: I0224 05:55:34.674868 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0215380e-69c6-41f6-a231-98e9714a160d" (UID: "0215380e-69c6-41f6-a231-98e9714a160d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:34.678945 master-0 kubenswrapper[34361]: I0224 05:55:34.678879 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-config-data" (OuterVolumeSpecName: "config-data") pod "0215380e-69c6-41f6-a231-98e9714a160d" (UID: "0215380e-69c6-41f6-a231-98e9714a160d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:34.740582 master-0 kubenswrapper[34361]: I0224 05:55:34.740507 34361 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-scripts\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:34.740582 master-0 kubenswrapper[34361]: I0224 05:55:34.740560 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-426j4\" (UniqueName: \"kubernetes.io/projected/0215380e-69c6-41f6-a231-98e9714a160d-kube-api-access-426j4\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:34.740582 master-0 kubenswrapper[34361]: I0224 05:55:34.740580 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:34.740582 master-0 kubenswrapper[34361]: I0224 05:55:34.740596 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0215380e-69c6-41f6-a231-98e9714a160d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:34.985085 master-0 kubenswrapper[34361]: I0224 05:55:34.984978 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-l5vzg" event={"ID":"0215380e-69c6-41f6-a231-98e9714a160d","Type":"ContainerDied","Data":"001295e46c81403233da552a6bd26f3d1c5989ef1c45addc83d0ef5248614fdd"} Feb 24 05:55:34.985085 master-0 kubenswrapper[34361]: I0224 05:55:34.985056 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="001295e46c81403233da552a6bd26f3d1c5989ef1c45addc83d0ef5248614fdd" Feb 24 05:55:34.985636 master-0 kubenswrapper[34361]: I0224 05:55:34.985112 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-l5vzg" Feb 24 05:55:35.262560 master-0 kubenswrapper[34361]: I0224 05:55:35.261995 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:55:35.262560 master-0 kubenswrapper[34361]: I0224 05:55:35.262368 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerName="nova-api-log" containerID="cri-o://7f97d9de5975a4d2d1d19fcf71d72274f703fdac1aa293895261156a3558beb8" gracePeriod=30 Feb 24 05:55:35.262837 master-0 kubenswrapper[34361]: I0224 05:55:35.262590 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerName="nova-api-api" containerID="cri-o://6d99395451a38492334030ad1f3a8ef049f606db06fee046630209a29f2c8895" gracePeriod=30 Feb 24 05:55:35.297341 master-0 kubenswrapper[34361]: I0224 05:55:35.291381 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:35.297341 master-0 kubenswrapper[34361]: I0224 05:55:35.291742 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" containerName="nova-scheduler-scheduler" containerID="cri-o://59a229cc27e8ab8436e29ad9f6fade4482c5b173ffe204683a6a47581c238e04" gracePeriod=30 Feb 24 05:55:35.309370 master-0 kubenswrapper[34361]: I0224 05:55:35.308091 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:55:35.309370 master-0 kubenswrapper[34361]: I0224 05:55:35.308485 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-log" containerID="cri-o://3f70fabdaa1c10de1289e9175a4e84b8bb9b8438a37c75570f67615cd4a67a5f" 
gracePeriod=30 Feb 24 05:55:35.309370 master-0 kubenswrapper[34361]: I0224 05:55:35.309095 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-metadata" containerID="cri-o://b24c1ec9a4cb118cf8f370ab23b3e38523e5d056fef82cbb1a6b9b9ca58ab3a8" gracePeriod=30 Feb 24 05:55:36.001741 master-0 kubenswrapper[34361]: I0224 05:55:36.001658 34361 generic.go:334] "Generic (PLEG): container finished" podID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerID="6d99395451a38492334030ad1f3a8ef049f606db06fee046630209a29f2c8895" exitCode=0 Feb 24 05:55:36.001741 master-0 kubenswrapper[34361]: I0224 05:55:36.001719 34361 generic.go:334] "Generic (PLEG): container finished" podID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerID="7f97d9de5975a4d2d1d19fcf71d72274f703fdac1aa293895261156a3558beb8" exitCode=143 Feb 24 05:55:36.002250 master-0 kubenswrapper[34361]: I0224 05:55:36.001785 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"732af8b5-18bd-4054-bf88-cd073fe009a6","Type":"ContainerDied","Data":"6d99395451a38492334030ad1f3a8ef049f606db06fee046630209a29f2c8895"} Feb 24 05:55:36.002250 master-0 kubenswrapper[34361]: I0224 05:55:36.001901 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"732af8b5-18bd-4054-bf88-cd073fe009a6","Type":"ContainerDied","Data":"7f97d9de5975a4d2d1d19fcf71d72274f703fdac1aa293895261156a3558beb8"} Feb 24 05:55:36.005033 master-0 kubenswrapper[34361]: I0224 05:55:36.005001 34361 generic.go:334] "Generic (PLEG): container finished" podID="cefa04df-75ac-48a5-ac80-62009d398d01" containerID="3f70fabdaa1c10de1289e9175a4e84b8bb9b8438a37c75570f67615cd4a67a5f" exitCode=143 Feb 24 05:55:36.005289 master-0 kubenswrapper[34361]: I0224 05:55:36.005048 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"cefa04df-75ac-48a5-ac80-62009d398d01","Type":"ContainerDied","Data":"3f70fabdaa1c10de1289e9175a4e84b8bb9b8438a37c75570f67615cd4a67a5f"} Feb 24 05:55:36.106802 master-0 kubenswrapper[34361]: I0224 05:55:36.106713 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:55:36.196847 master-0 kubenswrapper[34361]: I0224 05:55:36.196742 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9krjb\" (UniqueName: \"kubernetes.io/projected/732af8b5-18bd-4054-bf88-cd073fe009a6-kube-api-access-9krjb\") pod \"732af8b5-18bd-4054-bf88-cd073fe009a6\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " Feb 24 05:55:36.197195 master-0 kubenswrapper[34361]: I0224 05:55:36.197051 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-config-data\") pod \"732af8b5-18bd-4054-bf88-cd073fe009a6\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " Feb 24 05:55:36.197195 master-0 kubenswrapper[34361]: I0224 05:55:36.197161 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-internal-tls-certs\") pod \"732af8b5-18bd-4054-bf88-cd073fe009a6\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " Feb 24 05:55:36.197334 master-0 kubenswrapper[34361]: I0224 05:55:36.197272 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732af8b5-18bd-4054-bf88-cd073fe009a6-logs\") pod \"732af8b5-18bd-4054-bf88-cd073fe009a6\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " Feb 24 05:55:36.197611 master-0 kubenswrapper[34361]: I0224 05:55:36.197556 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-public-tls-certs\") pod \"732af8b5-18bd-4054-bf88-cd073fe009a6\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " Feb 24 05:55:36.197826 master-0 kubenswrapper[34361]: I0224 05:55:36.197772 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-combined-ca-bundle\") pod \"732af8b5-18bd-4054-bf88-cd073fe009a6\" (UID: \"732af8b5-18bd-4054-bf88-cd073fe009a6\") " Feb 24 05:55:36.203101 master-0 kubenswrapper[34361]: I0224 05:55:36.203027 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/732af8b5-18bd-4054-bf88-cd073fe009a6-kube-api-access-9krjb" (OuterVolumeSpecName: "kube-api-access-9krjb") pod "732af8b5-18bd-4054-bf88-cd073fe009a6" (UID: "732af8b5-18bd-4054-bf88-cd073fe009a6"). InnerVolumeSpecName "kube-api-access-9krjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:55:36.203427 master-0 kubenswrapper[34361]: I0224 05:55:36.203395 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/732af8b5-18bd-4054-bf88-cd073fe009a6-logs" (OuterVolumeSpecName: "logs") pod "732af8b5-18bd-4054-bf88-cd073fe009a6" (UID: "732af8b5-18bd-4054-bf88-cd073fe009a6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:55:36.204459 master-0 kubenswrapper[34361]: I0224 05:55:36.204420 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9krjb\" (UniqueName: \"kubernetes.io/projected/732af8b5-18bd-4054-bf88-cd073fe009a6-kube-api-access-9krjb\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:36.204459 master-0 kubenswrapper[34361]: I0224 05:55:36.204452 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/732af8b5-18bd-4054-bf88-cd073fe009a6-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:36.261008 master-0 kubenswrapper[34361]: I0224 05:55:36.260883 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "732af8b5-18bd-4054-bf88-cd073fe009a6" (UID: "732af8b5-18bd-4054-bf88-cd073fe009a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:36.270617 master-0 kubenswrapper[34361]: I0224 05:55:36.270536 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-config-data" (OuterVolumeSpecName: "config-data") pod "732af8b5-18bd-4054-bf88-cd073fe009a6" (UID: "732af8b5-18bd-4054-bf88-cd073fe009a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:36.287500 master-0 kubenswrapper[34361]: I0224 05:55:36.287426 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "732af8b5-18bd-4054-bf88-cd073fe009a6" (UID: "732af8b5-18bd-4054-bf88-cd073fe009a6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:36.291479 master-0 kubenswrapper[34361]: I0224 05:55:36.289632 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "732af8b5-18bd-4054-bf88-cd073fe009a6" (UID: "732af8b5-18bd-4054-bf88-cd073fe009a6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:36.309509 master-0 kubenswrapper[34361]: I0224 05:55:36.307781 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:36.309509 master-0 kubenswrapper[34361]: I0224 05:55:36.307849 34361 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:36.309509 master-0 kubenswrapper[34361]: I0224 05:55:36.307870 34361 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:36.309509 master-0 kubenswrapper[34361]: I0224 05:55:36.307884 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/732af8b5-18bd-4054-bf88-cd073fe009a6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:36.703804 master-0 kubenswrapper[34361]: E0224 05:55:36.703677 34361 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="59a229cc27e8ab8436e29ad9f6fade4482c5b173ffe204683a6a47581c238e04" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 05:55:36.708375 master-0 kubenswrapper[34361]: E0224 05:55:36.706590 34361 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59a229cc27e8ab8436e29ad9f6fade4482c5b173ffe204683a6a47581c238e04" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 05:55:36.708750 master-0 kubenswrapper[34361]: E0224 05:55:36.708590 34361 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="59a229cc27e8ab8436e29ad9f6fade4482c5b173ffe204683a6a47581c238e04" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 24 05:55:36.708839 master-0 kubenswrapper[34361]: E0224 05:55:36.708722 34361 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" containerName="nova-scheduler-scheduler" Feb 24 05:55:37.023356 master-0 kubenswrapper[34361]: I0224 05:55:37.023266 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"732af8b5-18bd-4054-bf88-cd073fe009a6","Type":"ContainerDied","Data":"c78f8c88b06bcf0bcfe70d732737b0a481d09d97ea43be4bdd2f06e366765783"} Feb 24 05:55:37.024018 master-0 kubenswrapper[34361]: I0224 05:55:37.023397 34361 scope.go:117] "RemoveContainer" containerID="6d99395451a38492334030ad1f3a8ef049f606db06fee046630209a29f2c8895" Feb 24 05:55:37.024018 master-0 kubenswrapper[34361]: I0224 05:55:37.023396 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:55:37.066678 master-0 kubenswrapper[34361]: I0224 05:55:37.066600 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:55:37.075266 master-0 kubenswrapper[34361]: I0224 05:55:37.075195 34361 scope.go:117] "RemoveContainer" containerID="7f97d9de5975a4d2d1d19fcf71d72274f703fdac1aa293895261156a3558beb8" Feb 24 05:55:37.148917 master-0 kubenswrapper[34361]: I0224 05:55:37.132486 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.164843 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: E0224 05:55:37.165607 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerName="nova-api-log" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.165626 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerName="nova-api-log" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: E0224 05:55:37.165637 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d229fa5-0153-43d2-92d6-e548ed604b0b" containerName="init" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.165644 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d229fa5-0153-43d2-92d6-e548ed604b0b" containerName="init" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: E0224 05:55:37.165666 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d229fa5-0153-43d2-92d6-e548ed604b0b" containerName="dnsmasq-dns" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.165673 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d229fa5-0153-43d2-92d6-e548ed604b0b" containerName="dnsmasq-dns" Feb 24 05:55:37.166224 master-0 
kubenswrapper[34361]: E0224 05:55:37.165681 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerName="nova-api-api" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.165688 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerName="nova-api-api" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: E0224 05:55:37.165718 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0215380e-69c6-41f6-a231-98e9714a160d" containerName="nova-manage" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.165725 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="0215380e-69c6-41f6-a231-98e9714a160d" containerName="nova-manage" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: E0224 05:55:37.165742 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="598b59b9-eeed-4a94-a3b0-fb6c19d76c53" containerName="nova-manage" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.165750 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="598b59b9-eeed-4a94-a3b0-fb6c19d76c53" containerName="nova-manage" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.166079 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="598b59b9-eeed-4a94-a3b0-fb6c19d76c53" containerName="nova-manage" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.166099 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d229fa5-0153-43d2-92d6-e548ed604b0b" containerName="dnsmasq-dns" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.166172 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerName="nova-api-api" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.166184 34361 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" containerName="nova-api-log" Feb 24 05:55:37.166224 master-0 kubenswrapper[34361]: I0224 05:55:37.166245 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="0215380e-69c6-41f6-a231-98e9714a160d" containerName="nova-manage" Feb 24 05:55:37.168570 master-0 kubenswrapper[34361]: I0224 05:55:37.168527 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:55:37.171915 master-0 kubenswrapper[34361]: I0224 05:55:37.171847 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 24 05:55:37.171915 master-0 kubenswrapper[34361]: I0224 05:55:37.171861 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 24 05:55:37.172233 master-0 kubenswrapper[34361]: I0224 05:55:37.171878 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 24 05:55:37.186362 master-0 kubenswrapper[34361]: I0224 05:55:37.185955 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:55:37.259718 master-0 kubenswrapper[34361]: I0224 05:55:37.259639 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.259718 master-0 kubenswrapper[34361]: I0224 05:55:37.259730 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-logs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.260082 master-0 kubenswrapper[34361]: I0224 05:55:37.259829 
34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.260082 master-0 kubenswrapper[34361]: I0224 05:55:37.259861 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.260082 master-0 kubenswrapper[34361]: I0224 05:55:37.259960 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dnk8\" (UniqueName: \"kubernetes.io/projected/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-kube-api-access-4dnk8\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.260082 master-0 kubenswrapper[34361]: I0224 05:55:37.260016 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-config-data\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.363469 master-0 kubenswrapper[34361]: I0224 05:55:37.363013 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dnk8\" (UniqueName: \"kubernetes.io/projected/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-kube-api-access-4dnk8\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.363469 master-0 kubenswrapper[34361]: I0224 05:55:37.363240 34361 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-config-data\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.365045 master-0 kubenswrapper[34361]: I0224 05:55:37.363880 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.365045 master-0 kubenswrapper[34361]: I0224 05:55:37.363993 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-logs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.365045 master-0 kubenswrapper[34361]: I0224 05:55:37.364288 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.365045 master-0 kubenswrapper[34361]: I0224 05:55:37.364354 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.366246 master-0 kubenswrapper[34361]: I0224 05:55:37.366175 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-logs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " 
pod="openstack/nova-api-0" Feb 24 05:55:37.368158 master-0 kubenswrapper[34361]: I0224 05:55:37.368123 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-config-data\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.369051 master-0 kubenswrapper[34361]: I0224 05:55:37.369018 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-public-tls-certs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.370181 master-0 kubenswrapper[34361]: I0224 05:55:37.370125 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.370652 master-0 kubenswrapper[34361]: I0224 05:55:37.370604 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.396562 master-0 kubenswrapper[34361]: I0224 05:55:37.394756 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dnk8\" (UniqueName: \"kubernetes.io/projected/8fe396c8-a2e6-4d64-8cbf-9594635ad09d-kube-api-access-4dnk8\") pod \"nova-api-0\" (UID: \"8fe396c8-a2e6-4d64-8cbf-9594635ad09d\") " pod="openstack/nova-api-0" Feb 24 05:55:37.508642 master-0 kubenswrapper[34361]: I0224 05:55:37.508527 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 24 05:55:38.128385 master-0 kubenswrapper[34361]: W0224 05:55:38.128267 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fe396c8_a2e6_4d64_8cbf_9594635ad09d.slice/crio-2cf55c57c1baf91ec1d6e283c1abae63acca3214487cdaaa5012b2ff03c8af3c WatchSource:0}: Error finding container 2cf55c57c1baf91ec1d6e283c1abae63acca3214487cdaaa5012b2ff03c8af3c: Status 404 returned error can't find the container with id 2cf55c57c1baf91ec1d6e283c1abae63acca3214487cdaaa5012b2ff03c8af3c Feb 24 05:55:38.156378 master-0 kubenswrapper[34361]: I0224 05:55:38.156248 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 24 05:55:38.455924 master-0 kubenswrapper[34361]: I0224 05:55:38.455811 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.4:8775/\": read tcp 10.128.0.2:53838->10.128.1.4:8775: read: connection reset by peer" Feb 24 05:55:38.456365 master-0 kubenswrapper[34361]: I0224 05:55:38.455901 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.4:8775/\": read tcp 10.128.0.2:53832->10.128.1.4:8775: read: connection reset by peer" Feb 24 05:55:38.620393 master-0 kubenswrapper[34361]: I0224 05:55:38.620291 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="732af8b5-18bd-4054-bf88-cd073fe009a6" path="/var/lib/kubelet/pods/732af8b5-18bd-4054-bf88-cd073fe009a6/volumes" Feb 24 05:55:39.063188 master-0 kubenswrapper[34361]: I0224 05:55:39.063045 34361 generic.go:334] "Generic (PLEG): container finished" podID="cefa04df-75ac-48a5-ac80-62009d398d01" 
containerID="b24c1ec9a4cb118cf8f370ab23b3e38523e5d056fef82cbb1a6b9b9ca58ab3a8" exitCode=0 Feb 24 05:55:39.063458 master-0 kubenswrapper[34361]: I0224 05:55:39.063122 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cefa04df-75ac-48a5-ac80-62009d398d01","Type":"ContainerDied","Data":"b24c1ec9a4cb118cf8f370ab23b3e38523e5d056fef82cbb1a6b9b9ca58ab3a8"} Feb 24 05:55:39.063458 master-0 kubenswrapper[34361]: I0224 05:55:39.063361 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cefa04df-75ac-48a5-ac80-62009d398d01","Type":"ContainerDied","Data":"99116e412da4f7af95a99f6e64d3b866858d279019f6a24b6b7eb6f179ecdbcf"} Feb 24 05:55:39.063534 master-0 kubenswrapper[34361]: I0224 05:55:39.063457 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99116e412da4f7af95a99f6e64d3b866858d279019f6a24b6b7eb6f179ecdbcf" Feb 24 05:55:39.067293 master-0 kubenswrapper[34361]: I0224 05:55:39.067227 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fe396c8-a2e6-4d64-8cbf-9594635ad09d","Type":"ContainerStarted","Data":"46cb362457e381b794ef0cc8d6cfb8ebb9d650855de4e9253576e8ff7b8596db"} Feb 24 05:55:39.067378 master-0 kubenswrapper[34361]: I0224 05:55:39.067327 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fe396c8-a2e6-4d64-8cbf-9594635ad09d","Type":"ContainerStarted","Data":"46481b6a5c03c96999349f26660ac126ad070fce3fa71926a77f998c06c1c577"} Feb 24 05:55:39.067378 master-0 kubenswrapper[34361]: I0224 05:55:39.067350 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8fe396c8-a2e6-4d64-8cbf-9594635ad09d","Type":"ContainerStarted","Data":"2cf55c57c1baf91ec1d6e283c1abae63acca3214487cdaaa5012b2ff03c8af3c"} Feb 24 05:55:39.083151 master-0 kubenswrapper[34361]: I0224 05:55:39.082985 34361 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:55:39.113333 master-0 kubenswrapper[34361]: I0224 05:55:39.113074 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.113039571 podStartE2EDuration="2.113039571s" podCreationTimestamp="2026-02-24 05:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:39.093698819 +0000 UTC m=+1098.796315885" watchObservedRunningTime="2026-02-24 05:55:39.113039571 +0000 UTC m=+1098.815656647" Feb 24 05:55:39.222797 master-0 kubenswrapper[34361]: I0224 05:55:39.222720 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-config-data\") pod \"cefa04df-75ac-48a5-ac80-62009d398d01\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " Feb 24 05:55:39.222797 master-0 kubenswrapper[34361]: I0224 05:55:39.222786 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-combined-ca-bundle\") pod \"cefa04df-75ac-48a5-ac80-62009d398d01\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " Feb 24 05:55:39.223467 master-0 kubenswrapper[34361]: I0224 05:55:39.222827 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cefa04df-75ac-48a5-ac80-62009d398d01-logs\") pod \"cefa04df-75ac-48a5-ac80-62009d398d01\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " Feb 24 05:55:39.223467 master-0 kubenswrapper[34361]: I0224 05:55:39.223010 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-nova-metadata-tls-certs\") pod \"cefa04df-75ac-48a5-ac80-62009d398d01\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " Feb 24 05:55:39.223467 master-0 kubenswrapper[34361]: I0224 05:55:39.223265 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt2qg\" (UniqueName: \"kubernetes.io/projected/cefa04df-75ac-48a5-ac80-62009d398d01-kube-api-access-qt2qg\") pod \"cefa04df-75ac-48a5-ac80-62009d398d01\" (UID: \"cefa04df-75ac-48a5-ac80-62009d398d01\") " Feb 24 05:55:39.224059 master-0 kubenswrapper[34361]: I0224 05:55:39.224021 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cefa04df-75ac-48a5-ac80-62009d398d01-logs" (OuterVolumeSpecName: "logs") pod "cefa04df-75ac-48a5-ac80-62009d398d01" (UID: "cefa04df-75ac-48a5-ac80-62009d398d01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 05:55:39.228198 master-0 kubenswrapper[34361]: I0224 05:55:39.228136 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cefa04df-75ac-48a5-ac80-62009d398d01-kube-api-access-qt2qg" (OuterVolumeSpecName: "kube-api-access-qt2qg") pod "cefa04df-75ac-48a5-ac80-62009d398d01" (UID: "cefa04df-75ac-48a5-ac80-62009d398d01"). InnerVolumeSpecName "kube-api-access-qt2qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:55:39.270750 master-0 kubenswrapper[34361]: I0224 05:55:39.270653 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cefa04df-75ac-48a5-ac80-62009d398d01" (UID: "cefa04df-75ac-48a5-ac80-62009d398d01"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:39.291431 master-0 kubenswrapper[34361]: I0224 05:55:39.291296 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-config-data" (OuterVolumeSpecName: "config-data") pod "cefa04df-75ac-48a5-ac80-62009d398d01" (UID: "cefa04df-75ac-48a5-ac80-62009d398d01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:39.291775 master-0 kubenswrapper[34361]: I0224 05:55:39.291738 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cefa04df-75ac-48a5-ac80-62009d398d01" (UID: "cefa04df-75ac-48a5-ac80-62009d398d01"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:39.331339 master-0 kubenswrapper[34361]: I0224 05:55:39.326779 34361 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:39.331339 master-0 kubenswrapper[34361]: I0224 05:55:39.326838 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt2qg\" (UniqueName: \"kubernetes.io/projected/cefa04df-75ac-48a5-ac80-62009d398d01-kube-api-access-qt2qg\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:39.331339 master-0 kubenswrapper[34361]: I0224 05:55:39.326855 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:39.331339 master-0 kubenswrapper[34361]: I0224 05:55:39.326868 34361 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cefa04df-75ac-48a5-ac80-62009d398d01-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:39.331339 master-0 kubenswrapper[34361]: I0224 05:55:39.326878 34361 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cefa04df-75ac-48a5-ac80-62009d398d01-logs\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:40.100541 master-0 kubenswrapper[34361]: I0224 05:55:40.100234 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:55:40.161633 master-0 kubenswrapper[34361]: I0224 05:55:40.161553 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:55:40.179020 master-0 kubenswrapper[34361]: I0224 05:55:40.178939 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:55:40.217877 master-0 kubenswrapper[34361]: I0224 05:55:40.215277 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:55:40.218151 master-0 kubenswrapper[34361]: E0224 05:55:40.217944 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-log" Feb 24 05:55:40.218151 master-0 kubenswrapper[34361]: I0224 05:55:40.217979 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-log" Feb 24 05:55:40.218151 master-0 kubenswrapper[34361]: E0224 05:55:40.218053 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-metadata" Feb 24 05:55:40.218151 master-0 kubenswrapper[34361]: I0224 05:55:40.218061 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-metadata" Feb 24 05:55:40.219440 
master-0 kubenswrapper[34361]: I0224 05:55:40.218764 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-metadata" Feb 24 05:55:40.219440 master-0 kubenswrapper[34361]: I0224 05:55:40.218786 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" containerName="nova-metadata-log" Feb 24 05:55:40.226879 master-0 kubenswrapper[34361]: I0224 05:55:40.226625 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:55:40.231521 master-0 kubenswrapper[34361]: I0224 05:55:40.231473 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 24 05:55:40.237887 master-0 kubenswrapper[34361]: I0224 05:55:40.237811 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 24 05:55:40.266465 master-0 kubenswrapper[34361]: I0224 05:55:40.251842 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:55:40.291505 master-0 kubenswrapper[34361]: I0224 05:55:40.273647 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.291505 master-0 kubenswrapper[34361]: I0224 05:55:40.273772 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-logs\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.291505 master-0 kubenswrapper[34361]: I0224 
05:55:40.273982 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.291505 master-0 kubenswrapper[34361]: I0224 05:55:40.274437 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-config-data\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.291505 master-0 kubenswrapper[34361]: I0224 05:55:40.274616 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xj9s\" (UniqueName: \"kubernetes.io/projected/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-kube-api-access-5xj9s\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.377224 master-0 kubenswrapper[34361]: I0224 05:55:40.377021 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.377224 master-0 kubenswrapper[34361]: I0224 05:55:40.377221 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-config-data\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.377542 master-0 kubenswrapper[34361]: I0224 05:55:40.377283 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5xj9s\" (UniqueName: \"kubernetes.io/projected/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-kube-api-access-5xj9s\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.377542 master-0 kubenswrapper[34361]: I0224 05:55:40.377401 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.377542 master-0 kubenswrapper[34361]: I0224 05:55:40.377431 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-logs\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.378175 master-0 kubenswrapper[34361]: I0224 05:55:40.378130 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-logs\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.380967 master-0 kubenswrapper[34361]: I0224 05:55:40.380910 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.382214 master-0 kubenswrapper[34361]: I0224 05:55:40.382170 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.382434 master-0 kubenswrapper[34361]: I0224 05:55:40.382288 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-config-data\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.406302 master-0 kubenswrapper[34361]: I0224 05:55:40.406243 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xj9s\" (UniqueName: \"kubernetes.io/projected/37764ade-6e1b-4bbc-b592-a2f1d3bb49f8-kube-api-access-5xj9s\") pod \"nova-metadata-0\" (UID: \"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8\") " pod="openstack/nova-metadata-0" Feb 24 05:55:40.625340 master-0 kubenswrapper[34361]: I0224 05:55:40.625231 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 24 05:55:40.637583 master-0 kubenswrapper[34361]: I0224 05:55:40.637411 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cefa04df-75ac-48a5-ac80-62009d398d01" path="/var/lib/kubelet/pods/cefa04df-75ac-48a5-ac80-62009d398d01/volumes" Feb 24 05:55:41.123096 master-0 kubenswrapper[34361]: I0224 05:55:41.123011 34361 generic.go:334] "Generic (PLEG): container finished" podID="7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" containerID="59a229cc27e8ab8436e29ad9f6fade4482c5b173ffe204683a6a47581c238e04" exitCode=0 Feb 24 05:55:41.123096 master-0 kubenswrapper[34361]: I0224 05:55:41.123084 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651","Type":"ContainerDied","Data":"59a229cc27e8ab8436e29ad9f6fade4482c5b173ffe204683a6a47581c238e04"} Feb 24 05:55:41.184565 master-0 kubenswrapper[34361]: I0224 05:55:41.184106 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 24 05:55:41.341010 master-0 kubenswrapper[34361]: I0224 05:55:41.340896 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:55:41.418261 master-0 kubenswrapper[34361]: I0224 05:55:41.418190 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-config-data\") pod \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " Feb 24 05:55:41.418698 master-0 kubenswrapper[34361]: I0224 05:55:41.418665 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc284\" (UniqueName: \"kubernetes.io/projected/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-kube-api-access-pc284\") pod \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " Feb 24 05:55:41.418818 master-0 kubenswrapper[34361]: I0224 05:55:41.418789 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-combined-ca-bundle\") pod \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\" (UID: \"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651\") " Feb 24 05:55:41.443534 master-0 kubenswrapper[34361]: I0224 05:55:41.443466 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-kube-api-access-pc284" (OuterVolumeSpecName: "kube-api-access-pc284") pod "7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" (UID: "7c71a7ab-625b-4ba1-b7d2-7832cc1ba651"). InnerVolumeSpecName "kube-api-access-pc284". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:55:41.454344 master-0 kubenswrapper[34361]: I0224 05:55:41.454251 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-config-data" (OuterVolumeSpecName: "config-data") pod "7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" (UID: "7c71a7ab-625b-4ba1-b7d2-7832cc1ba651"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:41.462856 master-0 kubenswrapper[34361]: I0224 05:55:41.462804 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" (UID: "7c71a7ab-625b-4ba1-b7d2-7832cc1ba651"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:55:41.521923 master-0 kubenswrapper[34361]: I0224 05:55:41.521761 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:41.521923 master-0 kubenswrapper[34361]: I0224 05:55:41.521837 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc284\" (UniqueName: \"kubernetes.io/projected/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-kube-api-access-pc284\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:41.521923 master-0 kubenswrapper[34361]: I0224 05:55:41.521849 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 05:55:42.143697 master-0 kubenswrapper[34361]: I0224 05:55:42.143604 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"7c71a7ab-625b-4ba1-b7d2-7832cc1ba651","Type":"ContainerDied","Data":"e0b810cc86386ad7bdb27f354891aa09362873975a56c6fac62f49d32498d1fb"} Feb 24 05:55:42.143697 master-0 kubenswrapper[34361]: I0224 05:55:42.143703 34361 scope.go:117] "RemoveContainer" containerID="59a229cc27e8ab8436e29ad9f6fade4482c5b173ffe204683a6a47581c238e04" Feb 24 05:55:42.144035 master-0 kubenswrapper[34361]: I0224 05:55:42.143643 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:55:42.147167 master-0 kubenswrapper[34361]: I0224 05:55:42.146937 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8","Type":"ContainerStarted","Data":"f472bdaea5287d7c59407f56540ccf8f9b6ed82ec9c27a1140cfbd9acd26d1ac"} Feb 24 05:55:42.147167 master-0 kubenswrapper[34361]: I0224 05:55:42.147036 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8","Type":"ContainerStarted","Data":"b0b4d93c7a3c5172ea211ea5cd6156504739fb540327f908afca8bf60c1de688"} Feb 24 05:55:42.147167 master-0 kubenswrapper[34361]: I0224 05:55:42.147066 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"37764ade-6e1b-4bbc-b592-a2f1d3bb49f8","Type":"ContainerStarted","Data":"828b85f2314539295b5696201eb5ac634bd2371bb988fcda44c7c8f76acd85f5"} Feb 24 05:55:42.214570 master-0 kubenswrapper[34361]: I0224 05:55:42.211214 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.211187981 podStartE2EDuration="2.211187981s" podCreationTimestamp="2026-02-24 05:55:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:42.180049722 +0000 UTC m=+1101.882666798" 
watchObservedRunningTime="2026-02-24 05:55:42.211187981 +0000 UTC m=+1101.913805037" Feb 24 05:55:42.226941 master-0 kubenswrapper[34361]: I0224 05:55:42.224665 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:42.243619 master-0 kubenswrapper[34361]: I0224 05:55:42.243511 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:42.329357 master-0 kubenswrapper[34361]: I0224 05:55:42.329257 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:42.331562 master-0 kubenswrapper[34361]: E0224 05:55:42.331141 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" containerName="nova-scheduler-scheduler" Feb 24 05:55:42.331562 master-0 kubenswrapper[34361]: I0224 05:55:42.331171 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" containerName="nova-scheduler-scheduler" Feb 24 05:55:42.331822 master-0 kubenswrapper[34361]: I0224 05:55:42.331800 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" containerName="nova-scheduler-scheduler" Feb 24 05:55:42.348266 master-0 kubenswrapper[34361]: I0224 05:55:42.348179 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:55:42.351929 master-0 kubenswrapper[34361]: I0224 05:55:42.351900 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 24 05:55:42.378890 master-0 kubenswrapper[34361]: I0224 05:55:42.378822 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:42.454115 master-0 kubenswrapper[34361]: I0224 05:55:42.453901 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360d9532-5637-4ec1-b703-6ea284ec943b-config-data\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:42.454115 master-0 kubenswrapper[34361]: I0224 05:55:42.454038 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jlhc\" (UniqueName: \"kubernetes.io/projected/360d9532-5637-4ec1-b703-6ea284ec943b-kube-api-access-6jlhc\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:42.454485 master-0 kubenswrapper[34361]: I0224 05:55:42.454137 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360d9532-5637-4ec1-b703-6ea284ec943b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:42.556607 master-0 kubenswrapper[34361]: I0224 05:55:42.556534 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jlhc\" (UniqueName: \"kubernetes.io/projected/360d9532-5637-4ec1-b703-6ea284ec943b-kube-api-access-6jlhc\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " 
pod="openstack/nova-scheduler-0" Feb 24 05:55:42.557245 master-0 kubenswrapper[34361]: I0224 05:55:42.557212 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360d9532-5637-4ec1-b703-6ea284ec943b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:42.557511 master-0 kubenswrapper[34361]: I0224 05:55:42.557474 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360d9532-5637-4ec1-b703-6ea284ec943b-config-data\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:42.563869 master-0 kubenswrapper[34361]: I0224 05:55:42.563372 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360d9532-5637-4ec1-b703-6ea284ec943b-config-data\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:42.563869 master-0 kubenswrapper[34361]: I0224 05:55:42.563769 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360d9532-5637-4ec1-b703-6ea284ec943b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:42.576765 master-0 kubenswrapper[34361]: I0224 05:55:42.576712 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jlhc\" (UniqueName: \"kubernetes.io/projected/360d9532-5637-4ec1-b703-6ea284ec943b-kube-api-access-6jlhc\") pod \"nova-scheduler-0\" (UID: \"360d9532-5637-4ec1-b703-6ea284ec943b\") " pod="openstack/nova-scheduler-0" Feb 24 05:55:42.614693 master-0 kubenswrapper[34361]: I0224 05:55:42.614572 34361 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c71a7ab-625b-4ba1-b7d2-7832cc1ba651" path="/var/lib/kubelet/pods/7c71a7ab-625b-4ba1-b7d2-7832cc1ba651/volumes" Feb 24 05:55:42.708890 master-0 kubenswrapper[34361]: I0224 05:55:42.708617 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 24 05:55:43.210847 master-0 kubenswrapper[34361]: I0224 05:55:43.210728 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 24 05:55:43.211397 master-0 kubenswrapper[34361]: W0224 05:55:43.211246 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod360d9532_5637_4ec1_b703_6ea284ec943b.slice/crio-00c498abf46705e65e717850d1204556133bee7da20ca892e174ac62d53f5136 WatchSource:0}: Error finding container 00c498abf46705e65e717850d1204556133bee7da20ca892e174ac62d53f5136: Status 404 returned error can't find the container with id 00c498abf46705e65e717850d1204556133bee7da20ca892e174ac62d53f5136 Feb 24 05:55:44.194523 master-0 kubenswrapper[34361]: I0224 05:55:44.194445 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"360d9532-5637-4ec1-b703-6ea284ec943b","Type":"ContainerStarted","Data":"6802e9074a7a32838e0744400be1c9666f4322435477856b7293b3f520340baa"} Feb 24 05:55:44.194523 master-0 kubenswrapper[34361]: I0224 05:55:44.194524 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"360d9532-5637-4ec1-b703-6ea284ec943b","Type":"ContainerStarted","Data":"00c498abf46705e65e717850d1204556133bee7da20ca892e174ac62d53f5136"} Feb 24 05:55:44.229486 master-0 kubenswrapper[34361]: I0224 05:55:44.229385 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.229363732 podStartE2EDuration="2.229363732s" 
podCreationTimestamp="2026-02-24 05:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:55:44.226860585 +0000 UTC m=+1103.929477631" watchObservedRunningTime="2026-02-24 05:55:44.229363732 +0000 UTC m=+1103.931980778" Feb 24 05:55:45.626863 master-0 kubenswrapper[34361]: I0224 05:55:45.626786 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 05:55:45.627692 master-0 kubenswrapper[34361]: I0224 05:55:45.626906 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 24 05:55:47.509646 master-0 kubenswrapper[34361]: I0224 05:55:47.509517 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 05:55:47.510483 master-0 kubenswrapper[34361]: I0224 05:55:47.509697 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 24 05:55:47.709406 master-0 kubenswrapper[34361]: I0224 05:55:47.709224 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 24 05:55:48.523719 master-0 kubenswrapper[34361]: I0224 05:55:48.523578 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fe396c8-a2e6-4d64-8cbf-9594635ad09d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.1.12:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:55:48.523719 master-0 kubenswrapper[34361]: I0224 05:55:48.523634 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8fe396c8-a2e6-4d64-8cbf-9594635ad09d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.1.12:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 05:55:50.635290 master-0 
kubenswrapper[34361]: I0224 05:55:50.635187 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 05:55:50.635290 master-0 kubenswrapper[34361]: I0224 05:55:50.635283 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 24 05:55:51.637953 master-0 kubenswrapper[34361]: I0224 05:55:51.637829 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="37764ade-6e1b-4bbc-b592-a2f1d3bb49f8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.1.13:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:55:51.648766 master-0 kubenswrapper[34361]: I0224 05:55:51.648647 34361 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="37764ade-6e1b-4bbc-b592-a2f1d3bb49f8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.1.13:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 24 05:55:52.709259 master-0 kubenswrapper[34361]: I0224 05:55:52.709151 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 24 05:55:52.754418 master-0 kubenswrapper[34361]: I0224 05:55:52.754298 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 24 05:55:53.465073 master-0 kubenswrapper[34361]: I0224 05:55:53.464955 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 24 05:55:57.520552 master-0 kubenswrapper[34361]: I0224 05:55:57.520430 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 24 05:55:57.521594 master-0 kubenswrapper[34361]: I0224 05:55:57.520620 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Feb 24 05:55:57.521719 master-0 kubenswrapper[34361]: I0224 05:55:57.521585 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 05:55:57.521719 master-0 kubenswrapper[34361]: I0224 05:55:57.521695 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 24 05:55:57.531299 master-0 kubenswrapper[34361]: I0224 05:55:57.531226 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 05:55:57.535542 master-0 kubenswrapper[34361]: I0224 05:55:57.535478 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 24 05:56:00.633614 master-0 kubenswrapper[34361]: I0224 05:56:00.633541 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 05:56:00.638635 master-0 kubenswrapper[34361]: I0224 05:56:00.638554 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 24 05:56:00.640033 master-0 kubenswrapper[34361]: I0224 05:56:00.639998 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 05:56:01.541301 master-0 kubenswrapper[34361]: I0224 05:56:01.541221 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 24 05:56:28.692629 master-0 kubenswrapper[34361]: I0224 05:56:28.692460 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rjgth"] Feb 24 05:56:28.693868 master-0 kubenswrapper[34361]: I0224 05:56:28.693794 34361 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" podUID="0b2fa994-aafc-4629-a833-1dc2435b42f4" containerName="sushy-emulator" 
containerID="cri-o://499d7907cff13e523f56adf0fa8f5df83fc3c5415a327eb0d78ff44229bc4782" gracePeriod=30 Feb 24 05:56:29.061579 master-0 kubenswrapper[34361]: I0224 05:56:29.061468 34361 generic.go:334] "Generic (PLEG): container finished" podID="0b2fa994-aafc-4629-a833-1dc2435b42f4" containerID="499d7907cff13e523f56adf0fa8f5df83fc3c5415a327eb0d78ff44229bc4782" exitCode=0 Feb 24 05:56:29.061579 master-0 kubenswrapper[34361]: I0224 05:56:29.061571 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" event={"ID":"0b2fa994-aafc-4629-a833-1dc2435b42f4","Type":"ContainerDied","Data":"499d7907cff13e523f56adf0fa8f5df83fc3c5415a327eb0d78ff44229bc4782"} Feb 24 05:56:29.412072 master-0 kubenswrapper[34361]: I0224 05:56:29.411988 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:56:29.495906 master-0 kubenswrapper[34361]: I0224 05:56:29.495768 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gpl8\" (UniqueName: \"kubernetes.io/projected/0b2fa994-aafc-4629-a833-1dc2435b42f4-kube-api-access-4gpl8\") pod \"0b2fa994-aafc-4629-a833-1dc2435b42f4\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " Feb 24 05:56:29.496396 master-0 kubenswrapper[34361]: I0224 05:56:29.496094 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0b2fa994-aafc-4629-a833-1dc2435b42f4-sushy-emulator-config\") pod \"0b2fa994-aafc-4629-a833-1dc2435b42f4\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " Feb 24 05:56:29.496396 master-0 kubenswrapper[34361]: I0224 05:56:29.496226 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0b2fa994-aafc-4629-a833-1dc2435b42f4-os-client-config\") pod 
\"0b2fa994-aafc-4629-a833-1dc2435b42f4\" (UID: \"0b2fa994-aafc-4629-a833-1dc2435b42f4\") " Feb 24 05:56:29.497071 master-0 kubenswrapper[34361]: I0224 05:56:29.496996 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b2fa994-aafc-4629-a833-1dc2435b42f4-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "0b2fa994-aafc-4629-a833-1dc2435b42f4" (UID: "0b2fa994-aafc-4629-a833-1dc2435b42f4"). InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 05:56:29.497527 master-0 kubenswrapper[34361]: I0224 05:56:29.497464 34361 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/0b2fa994-aafc-4629-a833-1dc2435b42f4-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:56:29.500466 master-0 kubenswrapper[34361]: I0224 05:56:29.500389 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2fa994-aafc-4629-a833-1dc2435b42f4-kube-api-access-4gpl8" (OuterVolumeSpecName: "kube-api-access-4gpl8") pod "0b2fa994-aafc-4629-a833-1dc2435b42f4" (UID: "0b2fa994-aafc-4629-a833-1dc2435b42f4"). InnerVolumeSpecName "kube-api-access-4gpl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 05:56:29.600351 master-0 kubenswrapper[34361]: I0224 05:56:29.600124 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gpl8\" (UniqueName: \"kubernetes.io/projected/0b2fa994-aafc-4629-a833-1dc2435b42f4-kube-api-access-4gpl8\") on node \"master-0\" DevicePath \"\"" Feb 24 05:56:30.075573 master-0 kubenswrapper[34361]: I0224 05:56:30.075488 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" event={"ID":"0b2fa994-aafc-4629-a833-1dc2435b42f4","Type":"ContainerDied","Data":"39cc02a2f5a4e465de05fad2e8bc40077a91b3c4f719bdecd649d3bb4da0cee2"} Feb 24 05:56:30.075573 master-0 kubenswrapper[34361]: I0224 05:56:30.075590 34361 scope.go:117] "RemoveContainer" containerID="499d7907cff13e523f56adf0fa8f5df83fc3c5415a327eb0d78ff44229bc4782" Feb 24 05:56:30.076538 master-0 kubenswrapper[34361]: I0224 05:56:30.075587 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-78f6d7d749-rjgth" Feb 24 05:56:30.551686 master-0 kubenswrapper[34361]: I0224 05:56:30.551597 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-96xg2" podUID="59d94335-c0b3-4bf5-b0a6-b3e1f618f2aa" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.157:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 24 05:56:30.824559 master-0 kubenswrapper[34361]: I0224 05:56:30.821795 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2fa994-aafc-4629-a833-1dc2435b42f4-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "0b2fa994-aafc-4629-a833-1dc2435b42f4" (UID: "0b2fa994-aafc-4629-a833-1dc2435b42f4"). InnerVolumeSpecName "os-client-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 05:56:30.845000 master-0 kubenswrapper[34361]: I0224 05:56:30.844893 34361 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/0b2fa994-aafc-4629-a833-1dc2435b42f4-os-client-config\") on node \"master-0\" DevicePath \"\"" Feb 24 05:56:31.014062 master-0 kubenswrapper[34361]: I0224 05:56:30.982432 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-5549q"] Feb 24 05:56:31.014062 master-0 kubenswrapper[34361]: E0224 05:56:30.983178 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2fa994-aafc-4629-a833-1dc2435b42f4" containerName="sushy-emulator" Feb 24 05:56:31.014062 master-0 kubenswrapper[34361]: I0224 05:56:30.983194 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2fa994-aafc-4629-a833-1dc2435b42f4" containerName="sushy-emulator" Feb 24 05:56:31.014062 master-0 kubenswrapper[34361]: I0224 05:56:30.983566 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2fa994-aafc-4629-a833-1dc2435b42f4" containerName="sushy-emulator" Feb 24 05:56:31.014062 master-0 kubenswrapper[34361]: I0224 05:56:30.984742 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.014062 master-0 kubenswrapper[34361]: I0224 05:56:30.987549 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 24 05:56:31.056073 master-0 kubenswrapper[34361]: I0224 05:56:31.054063 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-os-client-config\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.056073 master-0 kubenswrapper[34361]: I0224 05:56:31.054211 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.056073 master-0 kubenswrapper[34361]: I0224 05:56:31.054254 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l8dq\" (UniqueName: \"kubernetes.io/projected/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-kube-api-access-8l8dq\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.056073 master-0 kubenswrapper[34361]: I0224 05:56:31.055711 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-5549q"] Feb 24 05:56:31.102153 master-0 kubenswrapper[34361]: I0224 05:56:31.101838 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rjgth"] Feb 24 05:56:31.116422 master-0 
kubenswrapper[34361]: I0224 05:56:31.116262 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-78f6d7d749-rjgth"] Feb 24 05:56:31.155464 master-0 kubenswrapper[34361]: I0224 05:56:31.155344 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.155838 master-0 kubenswrapper[34361]: I0224 05:56:31.155607 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l8dq\" (UniqueName: \"kubernetes.io/projected/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-kube-api-access-8l8dq\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.156463 master-0 kubenswrapper[34361]: I0224 05:56:31.156438 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-os-client-config\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.157114 master-0 kubenswrapper[34361]: I0224 05:56:31.157056 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-sushy-emulator-config\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.162098 master-0 kubenswrapper[34361]: I0224 05:56:31.162037 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-client-config\" (UniqueName: \"kubernetes.io/secret/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-os-client-config\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.173011 master-0 kubenswrapper[34361]: I0224 05:56:31.172968 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l8dq\" (UniqueName: \"kubernetes.io/projected/11ec33a6-5856-4eea-87dc-c5814d1f7cf8-kube-api-access-8l8dq\") pod \"sushy-emulator-84965d5d88-5549q\" (UID: \"11ec33a6-5856-4eea-87dc-c5814d1f7cf8\") " pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.358372 master-0 kubenswrapper[34361]: I0224 05:56:31.358109 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:31.976146 master-0 kubenswrapper[34361]: W0224 05:56:31.976059 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11ec33a6_5856_4eea_87dc_c5814d1f7cf8.slice/crio-e8dff1e3a15b5dba8f524951e1b57e44abccc91bf8253294e1d4462290319fc7 WatchSource:0}: Error finding container e8dff1e3a15b5dba8f524951e1b57e44abccc91bf8253294e1d4462290319fc7: Status 404 returned error can't find the container with id e8dff1e3a15b5dba8f524951e1b57e44abccc91bf8253294e1d4462290319fc7 Feb 24 05:56:31.980510 master-0 kubenswrapper[34361]: I0224 05:56:31.980446 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-84965d5d88-5549q"] Feb 24 05:56:32.115362 master-0 kubenswrapper[34361]: I0224 05:56:32.115116 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" event={"ID":"11ec33a6-5856-4eea-87dc-c5814d1f7cf8","Type":"ContainerStarted","Data":"e8dff1e3a15b5dba8f524951e1b57e44abccc91bf8253294e1d4462290319fc7"} Feb 24 05:56:32.615135 master-0 
kubenswrapper[34361]: I0224 05:56:32.615029 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2fa994-aafc-4629-a833-1dc2435b42f4" path="/var/lib/kubelet/pods/0b2fa994-aafc-4629-a833-1dc2435b42f4/volumes" Feb 24 05:56:33.136853 master-0 kubenswrapper[34361]: I0224 05:56:33.136778 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" event={"ID":"11ec33a6-5856-4eea-87dc-c5814d1f7cf8","Type":"ContainerStarted","Data":"9e717461c977ae8e0c9af3a8e974b1c93dbc0bd4d4528a96b29352d90f9879dc"} Feb 24 05:56:33.166385 master-0 kubenswrapper[34361]: I0224 05:56:33.166083 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" podStartSLOduration=3.1660381969999998 podStartE2EDuration="3.166038197s" podCreationTimestamp="2026-02-24 05:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 05:56:33.160543799 +0000 UTC m=+1152.863160895" watchObservedRunningTime="2026-02-24 05:56:33.166038197 +0000 UTC m=+1152.868655283" Feb 24 05:56:41.358519 master-0 kubenswrapper[34361]: I0224 05:56:41.358382 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:41.358519 master-0 kubenswrapper[34361]: I0224 05:56:41.358522 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:41.373634 master-0 kubenswrapper[34361]: I0224 05:56:41.373547 34361 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 05:56:42.306998 master-0 kubenswrapper[34361]: I0224 05:56:42.306921 34361 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-84965d5d88-5549q" Feb 24 
05:57:23.731930 master-0 kubenswrapper[34361]: I0224 05:57:23.731779 34361 scope.go:117] "RemoveContainer" containerID="daf55ae9d390f698358051c3226bb41d0c117e2713443d4a5ebb58d7b50960ec" Feb 24 05:57:52.891382 master-0 kubenswrapper[34361]: E0224 05:57:52.891246 34361 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:50140->192.168.32.10:36187: read tcp 192.168.32.10:50140->192.168.32.10:36187: read: connection reset by peer Feb 24 05:58:23.901916 master-0 kubenswrapper[34361]: I0224 05:58:23.901838 34361 scope.go:117] "RemoveContainer" containerID="52540c52217a1760ab281ef48a693b2dfb9645bcbc15a990572211f0ca11cb14" Feb 24 05:58:23.941209 master-0 kubenswrapper[34361]: I0224 05:58:23.941142 34361 scope.go:117] "RemoveContainer" containerID="809200b42c356e44e4959c36a4e0f4f9adc64b7377838bfed8429a3c4bc571e9" Feb 24 05:58:24.007930 master-0 kubenswrapper[34361]: I0224 05:58:24.007868 34361 scope.go:117] "RemoveContainer" containerID="c35c7ecde0912c81c6f0da24c270691434f7feef5c1a125826559e148787e233" Feb 24 06:00:00.187352 master-0 kubenswrapper[34361]: I0224 06:00:00.186431 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc"] Feb 24 06:00:00.192352 master-0 kubenswrapper[34361]: I0224 06:00:00.188791 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.192352 master-0 kubenswrapper[34361]: I0224 06:00:00.191405 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-27rfg" Feb 24 06:00:00.197366 master-0 kubenswrapper[34361]: I0224 06:00:00.196270 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 06:00:00.197753 master-0 kubenswrapper[34361]: I0224 06:00:00.197668 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4xvb\" (UniqueName: \"kubernetes.io/projected/567c775f-2a63-4f05-b846-4a0422a8d0e9-kube-api-access-x4xvb\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.201350 master-0 kubenswrapper[34361]: I0224 06:00:00.197813 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/567c775f-2a63-4f05-b846-4a0422a8d0e9-config-volume\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.201350 master-0 kubenswrapper[34361]: I0224 06:00:00.198088 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/567c775f-2a63-4f05-b846-4a0422a8d0e9-secret-volume\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.203483 master-0 kubenswrapper[34361]: I0224 06:00:00.203187 34361 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc"] Feb 24 06:00:00.300015 master-0 kubenswrapper[34361]: I0224 06:00:00.299957 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4xvb\" (UniqueName: \"kubernetes.io/projected/567c775f-2a63-4f05-b846-4a0422a8d0e9-kube-api-access-x4xvb\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.300388 master-0 kubenswrapper[34361]: I0224 06:00:00.300366 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/567c775f-2a63-4f05-b846-4a0422a8d0e9-config-volume\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.300569 master-0 kubenswrapper[34361]: I0224 06:00:00.300552 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/567c775f-2a63-4f05-b846-4a0422a8d0e9-secret-volume\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.302596 master-0 kubenswrapper[34361]: I0224 06:00:00.301867 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/567c775f-2a63-4f05-b846-4a0422a8d0e9-config-volume\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.305072 master-0 kubenswrapper[34361]: I0224 06:00:00.304825 34361 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/567c775f-2a63-4f05-b846-4a0422a8d0e9-secret-volume\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.320299 master-0 kubenswrapper[34361]: I0224 06:00:00.320248 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4xvb\" (UniqueName: \"kubernetes.io/projected/567c775f-2a63-4f05-b846-4a0422a8d0e9-kube-api-access-x4xvb\") pod \"collect-profiles-29531880-xpxmc\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:00.511398 master-0 kubenswrapper[34361]: I0224 06:00:00.511293 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:01.077678 master-0 kubenswrapper[34361]: W0224 06:00:01.077591 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod567c775f_2a63_4f05_b846_4a0422a8d0e9.slice/crio-e80801c4789f58328a4c68d7a0ab50bffa16ffe9ce536e21c44c5b4e54ede988 WatchSource:0}: Error finding container e80801c4789f58328a4c68d7a0ab50bffa16ffe9ce536e21c44c5b4e54ede988: Status 404 returned error can't find the container with id e80801c4789f58328a4c68d7a0ab50bffa16ffe9ce536e21c44c5b4e54ede988 Feb 24 06:00:01.085275 master-0 kubenswrapper[34361]: I0224 06:00:01.084561 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc"] Feb 24 06:00:01.189004 master-0 kubenswrapper[34361]: I0224 06:00:01.188799 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" 
event={"ID":"567c775f-2a63-4f05-b846-4a0422a8d0e9","Type":"ContainerStarted","Data":"e80801c4789f58328a4c68d7a0ab50bffa16ffe9ce536e21c44c5b4e54ede988"} Feb 24 06:00:02.213100 master-0 kubenswrapper[34361]: I0224 06:00:02.213003 34361 generic.go:334] "Generic (PLEG): container finished" podID="567c775f-2a63-4f05-b846-4a0422a8d0e9" containerID="0331ca65e54bcb290b0bfd53b1146153f02e821f215f70eb2f854c4aa3074109" exitCode=0 Feb 24 06:00:02.214071 master-0 kubenswrapper[34361]: I0224 06:00:02.213098 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" event={"ID":"567c775f-2a63-4f05-b846-4a0422a8d0e9","Type":"ContainerDied","Data":"0331ca65e54bcb290b0bfd53b1146153f02e821f215f70eb2f854c4aa3074109"} Feb 24 06:00:03.858830 master-0 kubenswrapper[34361]: I0224 06:00:03.858750 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:04.042907 master-0 kubenswrapper[34361]: I0224 06:00:04.042831 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/567c775f-2a63-4f05-b846-4a0422a8d0e9-secret-volume\") pod \"567c775f-2a63-4f05-b846-4a0422a8d0e9\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " Feb 24 06:00:04.042907 master-0 kubenswrapper[34361]: I0224 06:00:04.042932 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/567c775f-2a63-4f05-b846-4a0422a8d0e9-config-volume\") pod \"567c775f-2a63-4f05-b846-4a0422a8d0e9\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " Feb 24 06:00:04.043389 master-0 kubenswrapper[34361]: I0224 06:00:04.043000 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4xvb\" (UniqueName: 
\"kubernetes.io/projected/567c775f-2a63-4f05-b846-4a0422a8d0e9-kube-api-access-x4xvb\") pod \"567c775f-2a63-4f05-b846-4a0422a8d0e9\" (UID: \"567c775f-2a63-4f05-b846-4a0422a8d0e9\") " Feb 24 06:00:04.044185 master-0 kubenswrapper[34361]: I0224 06:00:04.044123 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567c775f-2a63-4f05-b846-4a0422a8d0e9-config-volume" (OuterVolumeSpecName: "config-volume") pod "567c775f-2a63-4f05-b846-4a0422a8d0e9" (UID: "567c775f-2a63-4f05-b846-4a0422a8d0e9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 06:00:04.047403 master-0 kubenswrapper[34361]: I0224 06:00:04.047246 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567c775f-2a63-4f05-b846-4a0422a8d0e9-kube-api-access-x4xvb" (OuterVolumeSpecName: "kube-api-access-x4xvb") pod "567c775f-2a63-4f05-b846-4a0422a8d0e9" (UID: "567c775f-2a63-4f05-b846-4a0422a8d0e9"). InnerVolumeSpecName "kube-api-access-x4xvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 06:00:04.047612 master-0 kubenswrapper[34361]: I0224 06:00:04.047517 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567c775f-2a63-4f05-b846-4a0422a8d0e9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "567c775f-2a63-4f05-b846-4a0422a8d0e9" (UID: "567c775f-2a63-4f05-b846-4a0422a8d0e9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 06:00:04.147406 master-0 kubenswrapper[34361]: I0224 06:00:04.147230 34361 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/567c775f-2a63-4f05-b846-4a0422a8d0e9-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 06:00:04.147406 master-0 kubenswrapper[34361]: I0224 06:00:04.147338 34361 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/567c775f-2a63-4f05-b846-4a0422a8d0e9-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 06:00:04.147406 master-0 kubenswrapper[34361]: I0224 06:00:04.147361 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4xvb\" (UniqueName: \"kubernetes.io/projected/567c775f-2a63-4f05-b846-4a0422a8d0e9-kube-api-access-x4xvb\") on node \"master-0\" DevicePath \"\"" Feb 24 06:00:04.248489 master-0 kubenswrapper[34361]: I0224 06:00:04.248373 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" event={"ID":"567c775f-2a63-4f05-b846-4a0422a8d0e9","Type":"ContainerDied","Data":"e80801c4789f58328a4c68d7a0ab50bffa16ffe9ce536e21c44c5b4e54ede988"} Feb 24 06:00:04.248995 master-0 kubenswrapper[34361]: I0224 06:00:04.248953 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e80801c4789f58328a4c68d7a0ab50bffa16ffe9ce536e21c44c5b4e54ede988" Feb 24 06:00:04.249167 master-0 kubenswrapper[34361]: I0224 06:00:04.248522 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531880-xpxmc" Feb 24 06:00:04.996821 master-0 kubenswrapper[34361]: I0224 06:00:04.996458 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz"] Feb 24 06:00:05.018935 master-0 kubenswrapper[34361]: I0224 06:00:05.017547 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531835-tsgrz"] Feb 24 06:00:06.614512 master-0 kubenswrapper[34361]: I0224 06:00:06.614434 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8978e4e5-18ef-4b69-a127-5e9409163935" path="/var/lib/kubelet/pods/8978e4e5-18ef-4b69-a127-5e9409163935/volumes" Feb 24 06:00:24.257525 master-0 kubenswrapper[34361]: I0224 06:00:24.256424 34361 scope.go:117] "RemoveContainer" containerID="3c24b58bd92b804a63d803200f7a1ff1770a8e7351e2091f1326f31e84f6d272" Feb 24 06:01:00.216547 master-0 kubenswrapper[34361]: I0224 06:01:00.216422 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29531881-w8jl5"] Feb 24 06:01:00.218417 master-0 kubenswrapper[34361]: E0224 06:01:00.218121 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="567c775f-2a63-4f05-b846-4a0422a8d0e9" containerName="collect-profiles" Feb 24 06:01:00.218417 master-0 kubenswrapper[34361]: I0224 06:01:00.218165 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="567c775f-2a63-4f05-b846-4a0422a8d0e9" containerName="collect-profiles" Feb 24 06:01:00.219403 master-0 kubenswrapper[34361]: I0224 06:01:00.219366 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="567c775f-2a63-4f05-b846-4a0422a8d0e9" containerName="collect-profiles" Feb 24 06:01:00.243939 master-0 kubenswrapper[34361]: I0224 06:01:00.243851 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.262843 master-0 kubenswrapper[34361]: I0224 06:01:00.262768 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531881-w8jl5"] Feb 24 06:01:00.447024 master-0 kubenswrapper[34361]: I0224 06:01:00.446894 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-combined-ca-bundle\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.448608 master-0 kubenswrapper[34361]: I0224 06:01:00.448499 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5q8v\" (UniqueName: \"kubernetes.io/projected/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-kube-api-access-q5q8v\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.448872 master-0 kubenswrapper[34361]: I0224 06:01:00.448759 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-fernet-keys\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.449006 master-0 kubenswrapper[34361]: I0224 06:01:00.448971 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-config-data\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.551846 master-0 kubenswrapper[34361]: I0224 
06:01:00.551725 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-combined-ca-bundle\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.552290 master-0 kubenswrapper[34361]: I0224 06:01:00.552014 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5q8v\" (UniqueName: \"kubernetes.io/projected/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-kube-api-access-q5q8v\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.552480 master-0 kubenswrapper[34361]: I0224 06:01:00.552257 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-fernet-keys\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.555482 master-0 kubenswrapper[34361]: I0224 06:01:00.555277 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-config-data\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.556210 master-0 kubenswrapper[34361]: I0224 06:01:00.555765 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-combined-ca-bundle\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.556210 master-0 
kubenswrapper[34361]: I0224 06:01:00.556094 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-fernet-keys\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.559203 master-0 kubenswrapper[34361]: I0224 06:01:00.559144 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-config-data\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.586127 master-0 kubenswrapper[34361]: I0224 06:01:00.586041 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5q8v\" (UniqueName: \"kubernetes.io/projected/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-kube-api-access-q5q8v\") pod \"keystone-cron-29531881-w8jl5\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:00.883926 master-0 kubenswrapper[34361]: I0224 06:01:00.883699 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:01.510269 master-0 kubenswrapper[34361]: I0224 06:01:01.510144 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531881-w8jl5"] Feb 24 06:01:02.348210 master-0 kubenswrapper[34361]: I0224 06:01:02.348067 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531881-w8jl5" event={"ID":"3dd12cf5-aadd-4077-ba7a-ab9f5797250c","Type":"ContainerStarted","Data":"bd8ffa6268ddf7c02a5c9751fe1ae9737c78edfb98e98898127b525ce810198e"} Feb 24 06:01:02.348210 master-0 kubenswrapper[34361]: I0224 06:01:02.348179 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531881-w8jl5" event={"ID":"3dd12cf5-aadd-4077-ba7a-ab9f5797250c","Type":"ContainerStarted","Data":"d3d6db697be6938afabe099ab52242df04436d6626d80f3b235b5b0ced8adb4e"} Feb 24 06:01:02.400721 master-0 kubenswrapper[34361]: I0224 06:01:02.400549 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29531881-w8jl5" podStartSLOduration=2.400519498 podStartE2EDuration="2.400519498s" podCreationTimestamp="2026-02-24 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 06:01:02.380540279 +0000 UTC m=+1422.083157365" watchObservedRunningTime="2026-02-24 06:01:02.400519498 +0000 UTC m=+1422.103136554" Feb 24 06:01:04.405057 master-0 kubenswrapper[34361]: I0224 06:01:04.404968 34361 generic.go:334] "Generic (PLEG): container finished" podID="3dd12cf5-aadd-4077-ba7a-ab9f5797250c" containerID="bd8ffa6268ddf7c02a5c9751fe1ae9737c78edfb98e98898127b525ce810198e" exitCode=0 Feb 24 06:01:04.405931 master-0 kubenswrapper[34361]: I0224 06:01:04.405050 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531881-w8jl5" 
event={"ID":"3dd12cf5-aadd-4077-ba7a-ab9f5797250c","Type":"ContainerDied","Data":"bd8ffa6268ddf7c02a5c9751fe1ae9737c78edfb98e98898127b525ce810198e"} Feb 24 06:01:06.159179 master-0 kubenswrapper[34361]: I0224 06:01:06.159111 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:06.303295 master-0 kubenswrapper[34361]: I0224 06:01:06.303235 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5q8v\" (UniqueName: \"kubernetes.io/projected/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-kube-api-access-q5q8v\") pod \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " Feb 24 06:01:06.303913 master-0 kubenswrapper[34361]: I0224 06:01:06.303889 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-fernet-keys\") pod \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " Feb 24 06:01:06.304115 master-0 kubenswrapper[34361]: I0224 06:01:06.304096 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-combined-ca-bundle\") pod \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " Feb 24 06:01:06.304268 master-0 kubenswrapper[34361]: I0224 06:01:06.304238 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-config-data\") pod \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\" (UID: \"3dd12cf5-aadd-4077-ba7a-ab9f5797250c\") " Feb 24 06:01:06.308473 master-0 kubenswrapper[34361]: I0224 06:01:06.308405 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3dd12cf5-aadd-4077-ba7a-ab9f5797250c" (UID: "3dd12cf5-aadd-4077-ba7a-ab9f5797250c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 06:01:06.309384 master-0 kubenswrapper[34361]: I0224 06:01:06.309330 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-kube-api-access-q5q8v" (OuterVolumeSpecName: "kube-api-access-q5q8v") pod "3dd12cf5-aadd-4077-ba7a-ab9f5797250c" (UID: "3dd12cf5-aadd-4077-ba7a-ab9f5797250c"). InnerVolumeSpecName "kube-api-access-q5q8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 06:01:06.353721 master-0 kubenswrapper[34361]: I0224 06:01:06.353614 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3dd12cf5-aadd-4077-ba7a-ab9f5797250c" (UID: "3dd12cf5-aadd-4077-ba7a-ab9f5797250c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 06:01:06.397599 master-0 kubenswrapper[34361]: I0224 06:01:06.397509 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-config-data" (OuterVolumeSpecName: "config-data") pod "3dd12cf5-aadd-4077-ba7a-ab9f5797250c" (UID: "3dd12cf5-aadd-4077-ba7a-ab9f5797250c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 06:01:06.409814 master-0 kubenswrapper[34361]: I0224 06:01:06.409731 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5q8v\" (UniqueName: \"kubernetes.io/projected/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-kube-api-access-q5q8v\") on node \"master-0\" DevicePath \"\"" Feb 24 06:01:06.409814 master-0 kubenswrapper[34361]: I0224 06:01:06.409798 34361 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 24 06:01:06.409814 master-0 kubenswrapper[34361]: I0224 06:01:06.409818 34361 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 24 06:01:06.410262 master-0 kubenswrapper[34361]: I0224 06:01:06.409836 34361 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd12cf5-aadd-4077-ba7a-ab9f5797250c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 24 06:01:06.466847 master-0 kubenswrapper[34361]: I0224 06:01:06.466764 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531881-w8jl5" event={"ID":"3dd12cf5-aadd-4077-ba7a-ab9f5797250c","Type":"ContainerDied","Data":"d3d6db697be6938afabe099ab52242df04436d6626d80f3b235b5b0ced8adb4e"} Feb 24 06:01:06.466847 master-0 kubenswrapper[34361]: I0224 06:01:06.466827 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3d6db697be6938afabe099ab52242df04436d6626d80f3b235b5b0ced8adb4e" Feb 24 06:01:06.467225 master-0 kubenswrapper[34361]: I0224 06:01:06.466848 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29531881-w8jl5" Feb 24 06:01:20.077381 master-0 kubenswrapper[34361]: I0224 06:01:20.075649 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-f9qxr"] Feb 24 06:01:20.097782 master-0 kubenswrapper[34361]: I0224 06:01:20.097696 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-f9qxr"] Feb 24 06:01:20.619355 master-0 kubenswrapper[34361]: I0224 06:01:20.619256 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962f1471-7def-4417-a4bc-cf1013a76b2f" path="/var/lib/kubelet/pods/962f1471-7def-4417-a4bc-cf1013a76b2f/volumes" Feb 24 06:01:21.065575 master-0 kubenswrapper[34361]: I0224 06:01:21.065476 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b69d-account-create-update-7dq92"] Feb 24 06:01:21.090181 master-0 kubenswrapper[34361]: I0224 06:01:21.090071 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-738d-account-create-update-p9hmm"] Feb 24 06:01:21.108954 master-0 kubenswrapper[34361]: I0224 06:01:21.108864 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-62w87"] Feb 24 06:01:21.121707 master-0 kubenswrapper[34361]: I0224 06:01:21.121621 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rfgw2"] Feb 24 06:01:21.134139 master-0 kubenswrapper[34361]: I0224 06:01:21.134025 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b69d-account-create-update-7dq92"] Feb 24 06:01:21.145622 master-0 kubenswrapper[34361]: I0224 06:01:21.145574 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-738d-account-create-update-p9hmm"] Feb 24 06:01:21.156037 master-0 kubenswrapper[34361]: I0224 06:01:21.155979 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-62w87"] Feb 24 06:01:21.166161 master-0 
kubenswrapper[34361]: I0224 06:01:21.166075 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rfgw2"] Feb 24 06:01:22.048527 master-0 kubenswrapper[34361]: I0224 06:01:22.048446 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7814-account-create-update-vkdnw"] Feb 24 06:01:22.062700 master-0 kubenswrapper[34361]: I0224 06:01:22.062592 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7814-account-create-update-vkdnw"] Feb 24 06:01:22.624686 master-0 kubenswrapper[34361]: I0224 06:01:22.624593 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1580115a-d292-46a8-b90c-850d483892a4" path="/var/lib/kubelet/pods/1580115a-d292-46a8-b90c-850d483892a4/volumes" Feb 24 06:01:22.625514 master-0 kubenswrapper[34361]: I0224 06:01:22.625457 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5560d5a8-4360-4b16-b5ca-2817343b3ec9" path="/var/lib/kubelet/pods/5560d5a8-4360-4b16-b5ca-2817343b3ec9/volumes" Feb 24 06:01:22.626232 master-0 kubenswrapper[34361]: I0224 06:01:22.626197 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b94586-4af1-4814-aaa7-baeba7af6359" path="/var/lib/kubelet/pods/c5b94586-4af1-4814-aaa7-baeba7af6359/volumes" Feb 24 06:01:22.627047 master-0 kubenswrapper[34361]: I0224 06:01:22.626993 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca" path="/var/lib/kubelet/pods/d5c030be-c6d3-421a-9a88-e5a3cbd5c8ca/volumes" Feb 24 06:01:22.628784 master-0 kubenswrapper[34361]: I0224 06:01:22.628750 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd8843ac-1648-4291-8d77-ba67a5e46d2b" path="/var/lib/kubelet/pods/dd8843ac-1648-4291-8d77-ba67a5e46d2b/volumes" Feb 24 06:01:24.382541 master-0 kubenswrapper[34361]: I0224 06:01:24.382063 34361 scope.go:117] "RemoveContainer" 
containerID="65d2b8dd751716ea675453d0f0ff5d427a09809a3ad40f1add62946e5d0a5571"
Feb 24 06:01:24.441992 master-0 kubenswrapper[34361]: I0224 06:01:24.441904 34361 scope.go:117] "RemoveContainer" containerID="7112a30745a472e0723347eec5359bb7a5397bcd85933a4808883c7db8b9763e"
Feb 24 06:01:24.489439 master-0 kubenswrapper[34361]: I0224 06:01:24.489364 34361 scope.go:117] "RemoveContainer" containerID="8d1f60d586ac2b0d3c3b17db22297f713d35b861309c73591b30209f6c98ad21"
Feb 24 06:01:24.581070 master-0 kubenswrapper[34361]: I0224 06:01:24.577348 34361 scope.go:117] "RemoveContainer" containerID="b24c1ec9a4cb118cf8f370ab23b3e38523e5d056fef82cbb1a6b9b9ca58ab3a8"
Feb 24 06:01:24.640246 master-0 kubenswrapper[34361]: I0224 06:01:24.640199 34361 scope.go:117] "RemoveContainer" containerID="3f70fabdaa1c10de1289e9175a4e84b8bb9b8438a37c75570f67615cd4a67a5f"
Feb 24 06:01:24.664970 master-0 kubenswrapper[34361]: I0224 06:01:24.664908 34361 scope.go:117] "RemoveContainer" containerID="3377e172e16cebea7357cd056913ba2980dd298b502c539acf7cae646d2d3c96"
Feb 24 06:01:24.696986 master-0 kubenswrapper[34361]: I0224 06:01:24.696927 34361 scope.go:117] "RemoveContainer" containerID="a3dea6b25f70ce313d99b17381b693616e1d7310210cc2cba8026f495273acdd"
Feb 24 06:01:24.792025 master-0 kubenswrapper[34361]: I0224 06:01:24.791980 34361 scope.go:117] "RemoveContainer" containerID="1ab20e1b4e8e66bc887752774a1f00920bee111374be74e85255dfa14c229255"
Feb 24 06:01:24.839941 master-0 kubenswrapper[34361]: I0224 06:01:24.839886 34361 scope.go:117] "RemoveContainer" containerID="a6bb288e8b19f3d5a9ba17f1c1e199f60015d808b08b91cdf75f3da907a5a88b"
Feb 24 06:01:24.896173 master-0 kubenswrapper[34361]: I0224 06:01:24.895646 34361 scope.go:117] "RemoveContainer" containerID="215edc49d60e772323fd3bbc4c69723c8dadafacdce47e7c7984dd2521caa018"
Feb 24 06:01:24.947419 master-0 kubenswrapper[34361]: I0224 06:01:24.947340 34361 scope.go:117] "RemoveContainer" containerID="e5228a7ddc095dcfad9c4d23fbce83825608be2bd0507a3e6be62ae35103f671"
Feb 24 06:01:24.989259 master-0 kubenswrapper[34361]: I0224 06:01:24.989194 34361 scope.go:117] "RemoveContainer" containerID="45519f82cb58fe639143471d2ff7b23337594f893e8e328ded52c40f36c082fb"
Feb 24 06:01:44.074300 master-0 kubenswrapper[34361]: I0224 06:01:44.074044 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7zq2x"]
Feb 24 06:01:44.088217 master-0 kubenswrapper[34361]: I0224 06:01:44.088118 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7zq2x"]
Feb 24 06:01:44.617441 master-0 kubenswrapper[34361]: I0224 06:01:44.616681 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33ec35fe-3c0e-4a79-9f87-63279b8cc21a" path="/var/lib/kubelet/pods/33ec35fe-3c0e-4a79-9f87-63279b8cc21a/volumes"
Feb 24 06:01:51.050971 master-0 kubenswrapper[34361]: I0224 06:01:51.050865 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-f4vxh"]
Feb 24 06:01:51.073299 master-0 kubenswrapper[34361]: I0224 06:01:51.073226 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-f4vxh"]
Feb 24 06:01:52.621442 master-0 kubenswrapper[34361]: I0224 06:01:52.621226 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f1ab4b-258e-434c-8674-0375758ffd49" path="/var/lib/kubelet/pods/36f1ab4b-258e-434c-8674-0375758ffd49/volumes"
Feb 24 06:02:01.072963 master-0 kubenswrapper[34361]: I0224 06:02:01.072865 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-sns65"]
Feb 24 06:02:01.089917 master-0 kubenswrapper[34361]: I0224 06:02:01.089823 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-sns65"]
Feb 24 06:02:02.616965 master-0 kubenswrapper[34361]: I0224 06:02:02.616895 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66837be-9db0-4e89-be7a-fbcd10882b17" path="/var/lib/kubelet/pods/d66837be-9db0-4e89-be7a-fbcd10882b17/volumes"
Feb 24 06:02:05.108265 master-0 kubenswrapper[34361]: I0224 06:02:05.105724 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-828b-account-create-update-4bgnt"]
Feb 24 06:02:05.119735 master-0 kubenswrapper[34361]: I0224 06:02:05.119664 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3d67-account-create-update-vkpgp"]
Feb 24 06:02:05.133160 master-0 kubenswrapper[34361]: I0224 06:02:05.133075 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fcxq8"]
Feb 24 06:02:05.144866 master-0 kubenswrapper[34361]: I0224 06:02:05.144801 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-828b-account-create-update-4bgnt"]
Feb 24 06:02:05.158397 master-0 kubenswrapper[34361]: I0224 06:02:05.158326 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3d67-account-create-update-vkpgp"]
Feb 24 06:02:05.171005 master-0 kubenswrapper[34361]: I0224 06:02:05.170906 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fcxq8"]
Feb 24 06:02:06.623404 master-0 kubenswrapper[34361]: I0224 06:02:06.623296 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26f2ed2f-05e1-4060-8d74-200fcf3cbfe9" path="/var/lib/kubelet/pods/26f2ed2f-05e1-4060-8d74-200fcf3cbfe9/volumes"
Feb 24 06:02:06.627407 master-0 kubenswrapper[34361]: I0224 06:02:06.627066 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f8406c-2516-4c44-b748-bdc79ef32db1" path="/var/lib/kubelet/pods/c4f8406c-2516-4c44-b748-bdc79ef32db1/volumes"
Feb 24 06:02:06.627966 master-0 kubenswrapper[34361]: I0224 06:02:06.627928 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a55d2e-a011-4a92-a4a5-3f36d34661b5" path="/var/lib/kubelet/pods/d3a55d2e-a011-4a92-a4a5-3f36d34661b5/volumes"
Feb 24 06:02:12.116119 master-0 kubenswrapper[34361]: I0224 06:02:12.116023 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-j2nkz"]
Feb 24 06:02:12.128824 master-0 kubenswrapper[34361]: I0224 06:02:12.128721 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-j2nkz"]
Feb 24 06:02:12.619862 master-0 kubenswrapper[34361]: I0224 06:02:12.619777 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a50d2174-643c-425d-92e5-ff1ab4d12f7a" path="/var/lib/kubelet/pods/a50d2174-643c-425d-92e5-ff1ab4d12f7a/volumes"
Feb 24 06:02:23.059527 master-0 kubenswrapper[34361]: I0224 06:02:23.058994 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-b901-account-create-update-vmptn"]
Feb 24 06:02:23.078342 master-0 kubenswrapper[34361]: I0224 06:02:23.077596 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-hgms6"]
Feb 24 06:02:23.089857 master-0 kubenswrapper[34361]: I0224 06:02:23.089747 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-b901-account-create-update-vmptn"]
Feb 24 06:02:23.101836 master-0 kubenswrapper[34361]: I0224 06:02:23.101751 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-hgms6"]
Feb 24 06:02:24.619429 master-0 kubenswrapper[34361]: I0224 06:02:24.619351 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b27462-7223-4f43-8eea-25a2dcd42b17" path="/var/lib/kubelet/pods/32b27462-7223-4f43-8eea-25a2dcd42b17/volumes"
Feb 24 06:02:24.622669 master-0 kubenswrapper[34361]: I0224 06:02:24.621739 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a1e2cd8-9a9f-454d-b520-75769a722e55" path="/var/lib/kubelet/pods/9a1e2cd8-9a9f-454d-b520-75769a722e55/volumes"
Feb 24 06:02:25.325651 master-0 kubenswrapper[34361]: I0224 06:02:25.325547 34361 scope.go:117] "RemoveContainer" containerID="54eec3d598d469ce1517f3f57924ef0ec74d22cb2f19f65d7fdc0151234a97b1"
Feb 24 06:02:25.364674 master-0 kubenswrapper[34361]: I0224 06:02:25.364542 34361 scope.go:117] "RemoveContainer" containerID="32a49cb5298021eb2317880085eff0f3e379e1e29dc76290de13fe982a1891e4"
Feb 24 06:02:25.479998 master-0 kubenswrapper[34361]: I0224 06:02:25.479933 34361 scope.go:117] "RemoveContainer" containerID="7ee8a6559ab43eb317259dafe7cdf86adf385019298144a14eb4ca8308528154"
Feb 24 06:02:25.533834 master-0 kubenswrapper[34361]: I0224 06:02:25.527973 34361 scope.go:117] "RemoveContainer" containerID="6d2b5790114018e290bf3c0eb80fd60c045b9479ea39f45082fd548a03d99b46"
Feb 24 06:02:25.598069 master-0 kubenswrapper[34361]: I0224 06:02:25.598022 34361 scope.go:117] "RemoveContainer" containerID="509b23ea003f9f1b616fb11fbe93550071804b3fb869f13891f4c8285341bbf2"
Feb 24 06:02:25.656782 master-0 kubenswrapper[34361]: I0224 06:02:25.656702 34361 scope.go:117] "RemoveContainer" containerID="505c85bf407d16734485972c8f8ba68a955434679874c9188aa62bcaf5c2307a"
Feb 24 06:02:25.758691 master-0 kubenswrapper[34361]: I0224 06:02:25.756059 34361 scope.go:117] "RemoveContainer" containerID="b7d4a41f2866ac98c22c4063d66419579186118ba9b12a5eb8213634976ee515"
Feb 24 06:02:25.798948 master-0 kubenswrapper[34361]: I0224 06:02:25.798885 34361 scope.go:117] "RemoveContainer" containerID="247cdc915c9add122e28a4d7837e566b0d10d128b171a045c4a461105de9ab7f"
Feb 24 06:02:25.829106 master-0 kubenswrapper[34361]: I0224 06:02:25.829046 34361 scope.go:117] "RemoveContainer" containerID="49c48c9bcbb12493b70b28af441b3bdc7f385caa3ed90ea6876b2fb7f910379f"
Feb 24 06:02:31.064444 master-0 kubenswrapper[34361]: I0224 06:02:31.064339 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-629gt"]
Feb 24 06:02:31.079173 master-0 kubenswrapper[34361]: I0224 06:02:31.079060 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-629gt"]
Feb 24 06:02:32.618149 master-0 kubenswrapper[34361]: I0224 06:02:32.618058 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a033a9c9-abde-4d05-b958-06c6bb913e85" path="/var/lib/kubelet/pods/a033a9c9-abde-4d05-b958-06c6bb913e85/volumes"
Feb 24 06:02:42.057540 master-0 kubenswrapper[34361]: I0224 06:02:42.056432 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-trt9l"]
Feb 24 06:02:42.088739 master-0 kubenswrapper[34361]: I0224 06:02:42.088637 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-trt9l"]
Feb 24 06:02:42.616446 master-0 kubenswrapper[34361]: I0224 06:02:42.616361 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb75ff2-586c-4d0c-bb92-967635ac99d0" path="/var/lib/kubelet/pods/6eb75ff2-586c-4d0c-bb92-967635ac99d0/volumes"
Feb 24 06:02:52.056601 master-0 kubenswrapper[34361]: I0224 06:02:52.056445 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-m7xgd"]
Feb 24 06:02:52.087741 master-0 kubenswrapper[34361]: I0224 06:02:52.087593 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-m7xgd"]
Feb 24 06:02:52.619066 master-0 kubenswrapper[34361]: I0224 06:02:52.618965 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5d8934-00e0-46c9-ba9d-d9183edd6fb8" path="/var/lib/kubelet/pods/4f5d8934-00e0-46c9-ba9d-d9183edd6fb8/volumes"
Feb 24 06:02:55.053867 master-0 kubenswrapper[34361]: I0224 06:02:55.053776 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b7346-db-sync-f9mbk"]
Feb 24 06:02:55.071261 master-0 kubenswrapper[34361]: I0224 06:02:55.071175 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b7346-db-sync-f9mbk"]
Feb 24 06:02:56.624751 master-0 kubenswrapper[34361]: I0224 06:02:56.624647 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c862b6-5eb6-4f54-a435-a8e7691b87c9" path="/var/lib/kubelet/pods/41c862b6-5eb6-4f54-a435-a8e7691b87c9/volumes"
Feb 24 06:03:03.085082 master-0 kubenswrapper[34361]: I0224 06:03:03.085008 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-s9d6l"]
Feb 24 06:03:03.103841 master-0 kubenswrapper[34361]: I0224 06:03:03.103756 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-s9d6l"]
Feb 24 06:03:04.619202 master-0 kubenswrapper[34361]: I0224 06:03:04.619105 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e30393e-c247-4ba9-9db9-864d16ba6d82" path="/var/lib/kubelet/pods/7e30393e-c247-4ba9-9db9-864d16ba6d82/volumes"
Feb 24 06:03:10.061283 master-0 kubenswrapper[34361]: I0224 06:03:10.061147 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-6402-account-create-update-kj7ts"]
Feb 24 06:03:10.078252 master-0 kubenswrapper[34361]: I0224 06:03:10.078151 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-pwcj4"]
Feb 24 06:03:10.093261 master-0 kubenswrapper[34361]: I0224 06:03:10.093189 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-6402-account-create-update-kj7ts"]
Feb 24 06:03:10.107642 master-0 kubenswrapper[34361]: I0224 06:03:10.107545 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-pwcj4"]
Feb 24 06:03:10.631876 master-0 kubenswrapper[34361]: I0224 06:03:10.630611 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0262a8-c31e-4022-bf1e-7952af276733" path="/var/lib/kubelet/pods/0a0262a8-c31e-4022-bf1e-7952af276733/volumes"
Feb 24 06:03:10.632886 master-0 kubenswrapper[34361]: I0224 06:03:10.632806 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c5e68e-c7ed-4fb9-a323-2104110a3742" path="/var/lib/kubelet/pods/68c5e68e-c7ed-4fb9-a323-2104110a3742/volumes"
Feb 24 06:03:26.108142 master-0 kubenswrapper[34361]: I0224 06:03:26.107900 34361 scope.go:117] "RemoveContainer" containerID="96979dff88f72790d528082b0711c7f48cc32173ad6267d2e6500d3c608b8037"
Feb 24 06:03:26.164630 master-0 kubenswrapper[34361]: I0224 06:03:26.164512 34361 scope.go:117] "RemoveContainer" containerID="e1779a5577f87379b3eaec1b4b22da92e33df9ea40fc881bc79cca47a933b8d7"
Feb 24 06:03:26.277265 master-0 kubenswrapper[34361]: I0224 06:03:26.277195 34361 scope.go:117] "RemoveContainer" containerID="ec9e0f959f58b15d2ac33c7f7fe7637fca1a3c27908113b8181f4a982095b802"
Feb 24 06:03:26.355019 master-0 kubenswrapper[34361]: I0224 06:03:26.354938 34361 scope.go:117] "RemoveContainer" containerID="9d60d4d6b2af8e7533e56ae9ba0ebd383f1b4443362a0493fff00bdb76302614"
Feb 24 06:03:26.423301 master-0 kubenswrapper[34361]: I0224 06:03:26.423186 34361 scope.go:117] "RemoveContainer" containerID="c47f422e1fdc982c03255913a4df34e0eb690b433e5d23550b39b7db2c74272d"
Feb 24 06:03:26.510008 master-0 kubenswrapper[34361]: I0224 06:03:26.509916 34361 scope.go:117] "RemoveContainer" containerID="56b61e70b135e0157c325da061cf145839f62281a175f804247358f1c3ec123a"
Feb 24 06:03:26.551610 master-0 kubenswrapper[34361]: I0224 06:03:26.551535 34361 scope.go:117] "RemoveContainer" containerID="06699e4a7aac5b8021ea710da54b96f039768ebde19dc562ecdc64e5a9245ac0"
Feb 24 06:03:26.619938 master-0 kubenswrapper[34361]: I0224 06:03:26.619867 34361 scope.go:117] "RemoveContainer" containerID="bd09c98d98993340d389f251378d76ce14467fdb5b3cfc089a66c0e6178dace3"
Feb 24 06:03:35.079698 master-0 kubenswrapper[34361]: I0224 06:03:35.079577 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-pd272"]
Feb 24 06:03:35.096081 master-0 kubenswrapper[34361]: I0224 06:03:35.095998 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-pd272"]
Feb 24 06:03:36.616273 master-0 kubenswrapper[34361]: I0224 06:03:36.616202 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59553b6-01d7-45a8-8475-647431627701" path="/var/lib/kubelet/pods/e59553b6-01d7-45a8-8475-647431627701/volumes"
Feb 24 06:03:48.110478 master-0 kubenswrapper[34361]: I0224 06:03:48.109625 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qrtq2"]
Feb 24 06:03:48.132781 master-0 kubenswrapper[34361]: I0224 06:03:48.132666 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qrtq2"]
Feb 24 06:03:48.616336 master-0 kubenswrapper[34361]: I0224 06:03:48.616217 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf1db3e-e40c-41b6-ad8f-c1decbbfba24" path="/var/lib/kubelet/pods/ebf1db3e-e40c-41b6-ad8f-c1decbbfba24/volumes"
Feb 24 06:03:49.069521 master-0 kubenswrapper[34361]: I0224 06:03:49.069360 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kzhmb"]
Feb 24 06:03:49.085144 master-0 kubenswrapper[34361]: I0224 06:03:49.083397 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-4kz4t"]
Feb 24 06:03:49.099542 master-0 kubenswrapper[34361]: I0224 06:03:49.098235 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c618-account-create-update-mmq8h"]
Feb 24 06:03:49.113064 master-0 kubenswrapper[34361]: I0224 06:03:49.112965 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8a9d-account-create-update-hxq4n"]
Feb 24 06:03:49.124246 master-0 kubenswrapper[34361]: I0224 06:03:49.124105 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e077-account-create-update-fnxnr"]
Feb 24 06:03:49.136072 master-0 kubenswrapper[34361]: I0224 06:03:49.135979 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kzhmb"]
Feb 24 06:03:49.148935 master-0 kubenswrapper[34361]: I0224 06:03:49.148821 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c618-account-create-update-mmq8h"]
Feb 24 06:03:49.159064 master-0 kubenswrapper[34361]: I0224 06:03:49.158984 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-4kz4t"]
Feb 24 06:03:49.170293 master-0 kubenswrapper[34361]: I0224 06:03:49.170227 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8a9d-account-create-update-hxq4n"]
Feb 24 06:03:49.184408 master-0 kubenswrapper[34361]: I0224 06:03:49.184261 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e077-account-create-update-fnxnr"]
Feb 24 06:03:50.623005 master-0 kubenswrapper[34361]: I0224 06:03:50.622927 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a1970d-fc4d-4522-a195-fa7fc9d5485d" path="/var/lib/kubelet/pods/06a1970d-fc4d-4522-a195-fa7fc9d5485d/volumes"
Feb 24 06:03:50.626097 master-0 kubenswrapper[34361]: I0224 06:03:50.625775 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24794552-5cfa-428e-ad46-ce7a1794c7ec" path="/var/lib/kubelet/pods/24794552-5cfa-428e-ad46-ce7a1794c7ec/volumes"
Feb 24 06:03:50.627382 master-0 kubenswrapper[34361]: I0224 06:03:50.627288 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="398060c6-ec35-4659-89a2-550ad8c81453" path="/var/lib/kubelet/pods/398060c6-ec35-4659-89a2-550ad8c81453/volumes"
Feb 24 06:03:50.629630 master-0 kubenswrapper[34361]: I0224 06:03:50.628628 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481f56ba-4864-42fb-b0f3-02a4e4311e7d" path="/var/lib/kubelet/pods/481f56ba-4864-42fb-b0f3-02a4e4311e7d/volumes"
Feb 24 06:03:50.631750 master-0 kubenswrapper[34361]: I0224 06:03:50.631661 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d21763a0-0808-4fe2-94bb-37aea78c00f0" path="/var/lib/kubelet/pods/d21763a0-0808-4fe2-94bb-37aea78c00f0/volumes"
Feb 24 06:04:25.084376 master-0 kubenswrapper[34361]: I0224 06:04:25.083357 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ph4c9"]
Feb 24 06:04:25.110862 master-0 kubenswrapper[34361]: I0224 06:04:25.110766 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ph4c9"]
Feb 24 06:04:26.622772 master-0 kubenswrapper[34361]: I0224 06:04:26.622672 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4656fa5-01da-43e7-8bc9-f2b67c89b70d" path="/var/lib/kubelet/pods/d4656fa5-01da-43e7-8bc9-f2b67c89b70d/volumes"
Feb 24 06:04:26.861012 master-0 kubenswrapper[34361]: I0224 06:04:26.860876 34361 scope.go:117] "RemoveContainer" containerID="e5e444e3b28484ac024aae033f87647e5055ae23d82ae3f0756fc7aa00b3d0b0"
Feb 24 06:04:26.890956 master-0 kubenswrapper[34361]: I0224 06:04:26.890899 34361 scope.go:117] "RemoveContainer" containerID="d465ae7bc5b67aa22453114bb9d2bca2a310263c4849f3130a7ab19495572ff4"
Feb 24 06:04:27.011988 master-0 kubenswrapper[34361]: I0224 06:04:27.011914 34361 scope.go:117] "RemoveContainer" containerID="9926dabed789d363070cf3d4e2ba027bba87827031dcfcbffeb311425028de1f"
Feb 24 06:04:27.090331 master-0 kubenswrapper[34361]: I0224 06:04:27.090249 34361 scope.go:117] "RemoveContainer" containerID="df0fe7b9bd2f8eb372c48486e25717ab273f2d537f398fb8c3309e2504cfb362"
Feb 24 06:04:27.170659 master-0 kubenswrapper[34361]: I0224 06:04:27.170569 34361 scope.go:117] "RemoveContainer" containerID="af1522d029f8bb57a552add3e575b79016723a4e45e4ce2d7eb1c88f2b1f6d45"
Feb 24 06:04:27.199692 master-0 kubenswrapper[34361]: I0224 06:04:27.198838 34361 scope.go:117] "RemoveContainer" containerID="50a6bed273455d05151b6f2708e3f0bc7e0af934424f6af21e302193e2b54a6c"
Feb 24 06:04:27.249847 master-0 kubenswrapper[34361]: I0224 06:04:27.249775 34361 scope.go:117] "RemoveContainer" containerID="982f90cfe38d398e0f9cff69f20afa1d36878ca7162b82c475a6720604383ca4"
Feb 24 06:04:27.279122 master-0 kubenswrapper[34361]: I0224 06:04:27.279060 34361 scope.go:117] "RemoveContainer" containerID="c16fcc32e461add8199d9cf339ba076ac5d2296d2d3b20d6e0fdc5e9ebe01b73"
Feb 24 06:04:54.074654 master-0 kubenswrapper[34361]: I0224 06:04:54.074526 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-fck78"]
Feb 24 06:04:54.090472 master-0 kubenswrapper[34361]: I0224 06:04:54.090380 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-fck78"]
Feb 24 06:04:54.618082 master-0 kubenswrapper[34361]: I0224 06:04:54.617995 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d" path="/var/lib/kubelet/pods/b6bc1039-ed61-4fdb-9e4d-0cb8a4aa1e0d/volumes"
Feb 24 06:04:55.050901 master-0 kubenswrapper[34361]: I0224 06:04:55.048786 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lc7xf"]
Feb 24 06:04:55.063651 master-0 kubenswrapper[34361]: I0224 06:04:55.063529 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lc7xf"]
Feb 24 06:04:56.621225 master-0 kubenswrapper[34361]: I0224 06:04:56.621112 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d0a5bab-2c7d-4526-8505-873c732edcf1" path="/var/lib/kubelet/pods/7d0a5bab-2c7d-4526-8505-873c732edcf1/volumes"
Feb 24 06:05:27.576473 master-0 kubenswrapper[34361]: I0224 06:05:27.576399 34361 scope.go:117] "RemoveContainer" containerID="515d55b62b2b88bfd6765de031608c20d95a75d635ec2d3ad786f86826787472"
Feb 24 06:05:27.641456 master-0 kubenswrapper[34361]: I0224 06:05:27.641373 34361 scope.go:117] "RemoveContainer" containerID="e8550d306f17ed524e9dca5e627f8e0163ebe91bf66c9fb133708d5539e8e635"
Feb 24 06:05:32.098436 master-0 kubenswrapper[34361]: I0224 06:05:32.096681 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-4lbdf"]
Feb 24 06:05:32.114747 master-0 kubenswrapper[34361]: I0224 06:05:32.114639 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-4lbdf"]
Feb 24 06:05:32.628145 master-0 kubenswrapper[34361]: I0224 06:05:32.628039 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="598b59b9-eeed-4a94-a3b0-fb6c19d76c53" path="/var/lib/kubelet/pods/598b59b9-eeed-4a94-a3b0-fb6c19d76c53/volumes"
Feb 24 06:05:35.049377 master-0 kubenswrapper[34361]: I0224 06:05:35.049282 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-l5vzg"]
Feb 24 06:05:35.062422 master-0 kubenswrapper[34361]: I0224 06:05:35.061530 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-l5vzg"]
Feb 24 06:05:36.627661 master-0 kubenswrapper[34361]: I0224 06:05:36.627553 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0215380e-69c6-41f6-a231-98e9714a160d" path="/var/lib/kubelet/pods/0215380e-69c6-41f6-a231-98e9714a160d/volumes"
Feb 24 06:06:27.765294 master-0 kubenswrapper[34361]: I0224 06:06:27.765192 34361 scope.go:117] "RemoveContainer" containerID="88c32843b8ac98f536171b903f3090837a291cd61314786b4abb1451784d161b"
Feb 24 06:06:27.829655 master-0 kubenswrapper[34361]: I0224 06:06:27.829557 34361 scope.go:117] "RemoveContainer" containerID="cf4854259d77311f1ec27f712fbe10530ae0ecfdff1f0b17fcf2a99fb10cb7bb"
Feb 24 06:09:37.954977 master-0 kubenswrapper[34361]: E0224 06:09:37.954858 34361 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:39040->192.168.32.10:36187: write tcp 192.168.32.10:39040->192.168.32.10:36187: write: broken pipe
Feb 24 06:12:42.511384 master-0 kubenswrapper[34361]: E0224 06:12:42.509678 34361 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:50714->192.168.32.10:36187: write tcp 192.168.32.10:50714->192.168.32.10:36187: write: broken pipe
Feb 24 06:15:00.217212 master-0 kubenswrapper[34361]: I0224 06:15:00.217137 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"]
Feb 24 06:15:00.218488 master-0 kubenswrapper[34361]: E0224 06:15:00.218246 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dd12cf5-aadd-4077-ba7a-ab9f5797250c" containerName="keystone-cron"
Feb 24 06:15:00.218488 master-0 kubenswrapper[34361]: I0224 06:15:00.218276 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dd12cf5-aadd-4077-ba7a-ab9f5797250c" containerName="keystone-cron"
Feb 24 06:15:00.218758 master-0 kubenswrapper[34361]: I0224 06:15:00.218727 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dd12cf5-aadd-4077-ba7a-ab9f5797250c" containerName="keystone-cron"
Feb 24 06:15:00.221193 master-0 kubenswrapper[34361]: I0224 06:15:00.221099 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.245666 master-0 kubenswrapper[34361]: I0224 06:15:00.236759 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-27rfg"
Feb 24 06:15:00.245666 master-0 kubenswrapper[34361]: I0224 06:15:00.237115 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 06:15:00.245666 master-0 kubenswrapper[34361]: I0224 06:15:00.239589 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"]
Feb 24 06:15:00.344529 master-0 kubenswrapper[34361]: I0224 06:15:00.344438 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86485963-e689-4820-b86a-76829f286eb9-secret-volume\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.344790 master-0 kubenswrapper[34361]: I0224 06:15:00.344543 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlwmr\" (UniqueName: \"kubernetes.io/projected/86485963-e689-4820-b86a-76829f286eb9-kube-api-access-zlwmr\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.344790 master-0 kubenswrapper[34361]: I0224 06:15:00.344661 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86485963-e689-4820-b86a-76829f286eb9-config-volume\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.446768 master-0 kubenswrapper[34361]: I0224 06:15:00.446675 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86485963-e689-4820-b86a-76829f286eb9-secret-volume\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.447270 master-0 kubenswrapper[34361]: I0224 06:15:00.446927 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlwmr\" (UniqueName: \"kubernetes.io/projected/86485963-e689-4820-b86a-76829f286eb9-kube-api-access-zlwmr\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.447461 master-0 kubenswrapper[34361]: I0224 06:15:00.447285 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86485963-e689-4820-b86a-76829f286eb9-config-volume\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.448303 master-0 kubenswrapper[34361]: I0224 06:15:00.448254 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86485963-e689-4820-b86a-76829f286eb9-config-volume\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.450460 master-0 kubenswrapper[34361]: I0224 06:15:00.450397 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86485963-e689-4820-b86a-76829f286eb9-secret-volume\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.464346 master-0 kubenswrapper[34361]: I0224 06:15:00.464292 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlwmr\" (UniqueName: \"kubernetes.io/projected/86485963-e689-4820-b86a-76829f286eb9-kube-api-access-zlwmr\") pod \"collect-profiles-29531895-57vmb\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:00.566163 master-0 kubenswrapper[34361]: I0224 06:15:00.566073 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:01.088233 master-0 kubenswrapper[34361]: I0224 06:15:01.088089 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"]
Feb 24 06:15:01.295806 master-0 kubenswrapper[34361]: I0224 06:15:01.295705 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb" event={"ID":"86485963-e689-4820-b86a-76829f286eb9","Type":"ContainerStarted","Data":"22fbd68dcca455bb32d5309a1922bf42d5222de9850fee25570aeb0c5c059351"}
Feb 24 06:15:02.312593 master-0 kubenswrapper[34361]: I0224 06:15:02.312509 34361 generic.go:334] "Generic (PLEG): container finished" podID="86485963-e689-4820-b86a-76829f286eb9" containerID="78061a4f73dcfca6405896d64de76d848f8af09f249739302721361a34b0204a" exitCode=0
Feb 24 06:15:02.313340 master-0 kubenswrapper[34361]: I0224 06:15:02.312602 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb" event={"ID":"86485963-e689-4820-b86a-76829f286eb9","Type":"ContainerDied","Data":"78061a4f73dcfca6405896d64de76d848f8af09f249739302721361a34b0204a"}
Feb 24 06:15:03.829292 master-0 kubenswrapper[34361]: I0224 06:15:03.829228 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:03.957391 master-0 kubenswrapper[34361]: I0224 06:15:03.957273 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86485963-e689-4820-b86a-76829f286eb9-config-volume\") pod \"86485963-e689-4820-b86a-76829f286eb9\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") "
Feb 24 06:15:03.957391 master-0 kubenswrapper[34361]: I0224 06:15:03.957408 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86485963-e689-4820-b86a-76829f286eb9-secret-volume\") pod \"86485963-e689-4820-b86a-76829f286eb9\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") "
Feb 24 06:15:03.957946 master-0 kubenswrapper[34361]: I0224 06:15:03.957867 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86485963-e689-4820-b86a-76829f286eb9-config-volume" (OuterVolumeSpecName: "config-volume") pod "86485963-e689-4820-b86a-76829f286eb9" (UID: "86485963-e689-4820-b86a-76829f286eb9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 06:15:03.958055 master-0 kubenswrapper[34361]: I0224 06:15:03.958011 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlwmr\" (UniqueName: \"kubernetes.io/projected/86485963-e689-4820-b86a-76829f286eb9-kube-api-access-zlwmr\") pod \"86485963-e689-4820-b86a-76829f286eb9\" (UID: \"86485963-e689-4820-b86a-76829f286eb9\") "
Feb 24 06:15:03.958974 master-0 kubenswrapper[34361]: I0224 06:15:03.958928 34361 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86485963-e689-4820-b86a-76829f286eb9-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 24 06:15:03.961183 master-0 kubenswrapper[34361]: I0224 06:15:03.961096 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86485963-e689-4820-b86a-76829f286eb9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "86485963-e689-4820-b86a-76829f286eb9" (UID: "86485963-e689-4820-b86a-76829f286eb9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 06:15:03.962423 master-0 kubenswrapper[34361]: I0224 06:15:03.962296 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86485963-e689-4820-b86a-76829f286eb9-kube-api-access-zlwmr" (OuterVolumeSpecName: "kube-api-access-zlwmr") pod "86485963-e689-4820-b86a-76829f286eb9" (UID: "86485963-e689-4820-b86a-76829f286eb9"). InnerVolumeSpecName "kube-api-access-zlwmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 06:15:04.063017 master-0 kubenswrapper[34361]: I0224 06:15:04.062898 34361 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86485963-e689-4820-b86a-76829f286eb9-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 24 06:15:04.063017 master-0 kubenswrapper[34361]: I0224 06:15:04.062991 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlwmr\" (UniqueName: \"kubernetes.io/projected/86485963-e689-4820-b86a-76829f286eb9-kube-api-access-zlwmr\") on node \"master-0\" DevicePath \"\""
Feb 24 06:15:04.343645 master-0 kubenswrapper[34361]: I0224 06:15:04.343570 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb" event={"ID":"86485963-e689-4820-b86a-76829f286eb9","Type":"ContainerDied","Data":"22fbd68dcca455bb32d5309a1922bf42d5222de9850fee25570aeb0c5c059351"}
Feb 24 06:15:04.343645 master-0 kubenswrapper[34361]: I0224 06:15:04.343641 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22fbd68dcca455bb32d5309a1922bf42d5222de9850fee25570aeb0c5c059351"
Feb 24 06:15:04.344002 master-0 kubenswrapper[34361]: I0224 06:15:04.343736 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531895-57vmb"
Feb 24 06:15:04.968488 master-0 kubenswrapper[34361]: I0224 06:15:04.968128 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb"]
Feb 24 06:15:04.981933 master-0 kubenswrapper[34361]: I0224 06:15:04.980373 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531850-l54gb"]
Feb 24 06:15:06.622613 master-0 kubenswrapper[34361]: I0224 06:15:06.622512 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2249df3-3ce9-4f96-8f6f-59943125f8ed" path="/var/lib/kubelet/pods/f2249df3-3ce9-4f96-8f6f-59943125f8ed/volumes"
Feb 24 06:15:10.156223 master-0 kubenswrapper[34361]: E0224 06:15:10.156122 34361 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:49670->192.168.32.10:36187: write tcp 192.168.32.10:49670->192.168.32.10:36187: write: broken pipe
Feb 24 06:15:28.285742 master-0 kubenswrapper[34361]: I0224 06:15:28.285675 34361 scope.go:117] "RemoveContainer" containerID="f98e6d86d52c9e26477f3eaacf651db4b9ae2a6be8a9a3959935ba8da1491173"
Feb 24 06:16:48.727263 master-0 kubenswrapper[34361]: E0224 06:16:48.727153 34361 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:60792->192.168.32.10:36187: write tcp 192.168.32.10:60792->192.168.32.10:36187: write: broken pipe
Feb 24 06:25:05.894957 master-0 kubenswrapper[34361]: I0224 06:25:05.894828 34361 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8695dc84b-bccck" podUID="54d8708a-1dae-47bc-aead-fa87ab028821" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 24 06:30:00.198087 master-0 kubenswrapper[34361]: I0224 06:30:00.197972 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5"]
Feb 24 06:30:00.199096 master-0 kubenswrapper[34361]: E0224 06:30:00.198840 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86485963-e689-4820-b86a-76829f286eb9" containerName="collect-profiles"
Feb 24 06:30:00.199096 master-0 kubenswrapper[34361]: I0224 06:30:00.198864 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="86485963-e689-4820-b86a-76829f286eb9" containerName="collect-profiles"
Feb 24 06:30:00.200494 master-0 kubenswrapper[34361]: I0224 06:30:00.199265 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="86485963-e689-4820-b86a-76829f286eb9" containerName="collect-profiles"
Feb 24 06:30:00.200586 master-0 kubenswrapper[34361]: I0224 06:30:00.200554 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5"
Feb 24 06:30:00.204049 master-0 kubenswrapper[34361]: I0224 06:30:00.203870 34361 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-27rfg"
Feb 24 06:30:00.218463 master-0 kubenswrapper[34361]: I0224 06:30:00.216835 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 06:30:00.236352 master-0 kubenswrapper[34361]: I0224 06:30:00.212184 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5"]
Feb 24 06:30:00.283913 master-0 kubenswrapper[34361]: I0224 06:30:00.273748 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98z5l\" (UniqueName: \"kubernetes.io/projected/28d18337-939c-4201-9681-420aa627b692-kube-api-access-98z5l\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") "
pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.283913 master-0 kubenswrapper[34361]: I0224 06:30:00.273914 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28d18337-939c-4201-9681-420aa627b692-config-volume\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.283913 master-0 kubenswrapper[34361]: I0224 06:30:00.273985 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28d18337-939c-4201-9681-420aa627b692-secret-volume\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.377120 master-0 kubenswrapper[34361]: I0224 06:30:00.377027 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98z5l\" (UniqueName: \"kubernetes.io/projected/28d18337-939c-4201-9681-420aa627b692-kube-api-access-98z5l\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.377469 master-0 kubenswrapper[34361]: I0224 06:30:00.377236 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28d18337-939c-4201-9681-420aa627b692-config-volume\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.377469 master-0 kubenswrapper[34361]: I0224 06:30:00.377329 34361 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28d18337-939c-4201-9681-420aa627b692-secret-volume\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.379396 master-0 kubenswrapper[34361]: I0224 06:30:00.379349 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28d18337-939c-4201-9681-420aa627b692-config-volume\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.381601 master-0 kubenswrapper[34361]: I0224 06:30:00.381553 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28d18337-939c-4201-9681-420aa627b692-secret-volume\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.413162 master-0 kubenswrapper[34361]: I0224 06:30:00.413041 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98z5l\" (UniqueName: \"kubernetes.io/projected/28d18337-939c-4201-9681-420aa627b692-kube-api-access-98z5l\") pod \"collect-profiles-29531910-4pps5\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:00.552923 master-0 kubenswrapper[34361]: I0224 06:30:00.552858 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:01.050655 master-0 kubenswrapper[34361]: I0224 06:30:01.050571 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5"] Feb 24 06:30:01.057026 master-0 kubenswrapper[34361]: W0224 06:30:01.056921 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28d18337_939c_4201_9681_420aa627b692.slice/crio-0f3c95bf045d1e8861a61d96e632493d9ec886d3d9e7a69facff969da64a4f67 WatchSource:0}: Error finding container 0f3c95bf045d1e8861a61d96e632493d9ec886d3d9e7a69facff969da64a4f67: Status 404 returned error can't find the container with id 0f3c95bf045d1e8861a61d96e632493d9ec886d3d9e7a69facff969da64a4f67 Feb 24 06:30:01.696757 master-0 kubenswrapper[34361]: I0224 06:30:01.696671 34361 generic.go:334] "Generic (PLEG): container finished" podID="28d18337-939c-4201-9681-420aa627b692" containerID="4123cd13742d962e2d3588f42cbf873bb87347846b27f7c1059265e9ca2d5301" exitCode=0 Feb 24 06:30:01.696757 master-0 kubenswrapper[34361]: I0224 06:30:01.696737 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" event={"ID":"28d18337-939c-4201-9681-420aa627b692","Type":"ContainerDied","Data":"4123cd13742d962e2d3588f42cbf873bb87347846b27f7c1059265e9ca2d5301"} Feb 24 06:30:01.696757 master-0 kubenswrapper[34361]: I0224 06:30:01.696772 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" event={"ID":"28d18337-939c-4201-9681-420aa627b692","Type":"ContainerStarted","Data":"0f3c95bf045d1e8861a61d96e632493d9ec886d3d9e7a69facff969da64a4f67"} Feb 24 06:30:03.251893 master-0 kubenswrapper[34361]: I0224 06:30:03.251803 34361 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:03.391035 master-0 kubenswrapper[34361]: I0224 06:30:03.390832 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98z5l\" (UniqueName: \"kubernetes.io/projected/28d18337-939c-4201-9681-420aa627b692-kube-api-access-98z5l\") pod \"28d18337-939c-4201-9681-420aa627b692\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " Feb 24 06:30:03.391035 master-0 kubenswrapper[34361]: I0224 06:30:03.390897 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28d18337-939c-4201-9681-420aa627b692-secret-volume\") pod \"28d18337-939c-4201-9681-420aa627b692\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " Feb 24 06:30:03.391702 master-0 kubenswrapper[34361]: I0224 06:30:03.391122 34361 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28d18337-939c-4201-9681-420aa627b692-config-volume\") pod \"28d18337-939c-4201-9681-420aa627b692\" (UID: \"28d18337-939c-4201-9681-420aa627b692\") " Feb 24 06:30:03.392112 master-0 kubenswrapper[34361]: I0224 06:30:03.392057 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28d18337-939c-4201-9681-420aa627b692-config-volume" (OuterVolumeSpecName: "config-volume") pod "28d18337-939c-4201-9681-420aa627b692" (UID: "28d18337-939c-4201-9681-420aa627b692"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 06:30:03.398747 master-0 kubenswrapper[34361]: I0224 06:30:03.398671 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d18337-939c-4201-9681-420aa627b692-kube-api-access-98z5l" (OuterVolumeSpecName: "kube-api-access-98z5l") pod "28d18337-939c-4201-9681-420aa627b692" (UID: "28d18337-939c-4201-9681-420aa627b692"). InnerVolumeSpecName "kube-api-access-98z5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 06:30:03.398917 master-0 kubenswrapper[34361]: I0224 06:30:03.398672 34361 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d18337-939c-4201-9681-420aa627b692-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "28d18337-939c-4201-9681-420aa627b692" (UID: "28d18337-939c-4201-9681-420aa627b692"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 06:30:03.495517 master-0 kubenswrapper[34361]: I0224 06:30:03.495425 34361 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/28d18337-939c-4201-9681-420aa627b692-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 06:30:03.495517 master-0 kubenswrapper[34361]: I0224 06:30:03.495486 34361 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28d18337-939c-4201-9681-420aa627b692-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 24 06:30:03.495517 master-0 kubenswrapper[34361]: I0224 06:30:03.495498 34361 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98z5l\" (UniqueName: \"kubernetes.io/projected/28d18337-939c-4201-9681-420aa627b692-kube-api-access-98z5l\") on node \"master-0\" DevicePath \"\"" Feb 24 06:30:03.764367 master-0 kubenswrapper[34361]: I0224 06:30:03.764250 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" event={"ID":"28d18337-939c-4201-9681-420aa627b692","Type":"ContainerDied","Data":"0f3c95bf045d1e8861a61d96e632493d9ec886d3d9e7a69facff969da64a4f67"} Feb 24 06:30:03.764367 master-0 kubenswrapper[34361]: I0224 06:30:03.764376 34361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f3c95bf045d1e8861a61d96e632493d9ec886d3d9e7a69facff969da64a4f67" Feb 24 06:30:03.764744 master-0 kubenswrapper[34361]: I0224 06:30:03.764512 34361 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531910-4pps5" Feb 24 06:30:04.382702 master-0 kubenswrapper[34361]: I0224 06:30:04.382339 34361 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht"] Feb 24 06:30:04.400429 master-0 kubenswrapper[34361]: I0224 06:30:04.399585 34361 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531865-5wmht"] Feb 24 06:30:04.619748 master-0 kubenswrapper[34361]: I0224 06:30:04.619632 34361 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec4713f8-2961-462e-bdf0-ba653bd29445" path="/var/lib/kubelet/pods/ec4713f8-2961-462e-bdf0-ba653bd29445/volumes" Feb 24 06:30:08.293264 master-0 kubenswrapper[34361]: I0224 06:30:08.293152 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-khpqm/must-gather-s6ssr"] Feb 24 06:30:08.294265 master-0 kubenswrapper[34361]: E0224 06:30:08.294230 34361 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d18337-939c-4201-9681-420aa627b692" containerName="collect-profiles" Feb 24 06:30:08.294265 master-0 kubenswrapper[34361]: I0224 06:30:08.294260 34361 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d18337-939c-4201-9681-420aa627b692" containerName="collect-profiles" Feb 24 
06:30:08.294727 master-0 kubenswrapper[34361]: I0224 06:30:08.294691 34361 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d18337-939c-4201-9681-420aa627b692" containerName="collect-profiles" Feb 24 06:30:08.296675 master-0 kubenswrapper[34361]: I0224 06:30:08.296637 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khpqm/must-gather-s6ssr" Feb 24 06:30:08.302732 master-0 kubenswrapper[34361]: I0224 06:30:08.302656 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-khpqm"/"openshift-service-ca.crt" Feb 24 06:30:08.303100 master-0 kubenswrapper[34361]: I0224 06:30:08.303075 34361 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-khpqm"/"kube-root-ca.crt" Feb 24 06:30:08.326614 master-0 kubenswrapper[34361]: I0224 06:30:08.326446 34361 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-khpqm/must-gather-nz2cg"] Feb 24 06:30:08.329192 master-0 kubenswrapper[34361]: I0224 06:30:08.329104 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-khpqm/must-gather-nz2cg" Feb 24 06:30:08.352163 master-0 kubenswrapper[34361]: I0224 06:30:08.349710 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-khpqm/must-gather-s6ssr"] Feb 24 06:30:08.366329 master-0 kubenswrapper[34361]: I0224 06:30:08.365813 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a184f497-a24b-4056-9456-51c0f13af7ec-must-gather-output\") pod \"must-gather-s6ssr\" (UID: \"a184f497-a24b-4056-9456-51c0f13af7ec\") " pod="openshift-must-gather-khpqm/must-gather-s6ssr" Feb 24 06:30:08.366329 master-0 kubenswrapper[34361]: I0224 06:30:08.365932 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdtq\" (UniqueName: \"kubernetes.io/projected/a184f497-a24b-4056-9456-51c0f13af7ec-kube-api-access-rxdtq\") pod \"must-gather-s6ssr\" (UID: \"a184f497-a24b-4056-9456-51c0f13af7ec\") " pod="openshift-must-gather-khpqm/must-gather-s6ssr" Feb 24 06:30:08.374327 master-0 kubenswrapper[34361]: I0224 06:30:08.370409 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-khpqm/must-gather-nz2cg"] Feb 24 06:30:08.471977 master-0 kubenswrapper[34361]: I0224 06:30:08.471877 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a184f497-a24b-4056-9456-51c0f13af7ec-must-gather-output\") pod \"must-gather-s6ssr\" (UID: \"a184f497-a24b-4056-9456-51c0f13af7ec\") " pod="openshift-must-gather-khpqm/must-gather-s6ssr" Feb 24 06:30:08.472297 master-0 kubenswrapper[34361]: I0224 06:30:08.472016 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxdtq\" (UniqueName: \"kubernetes.io/projected/a184f497-a24b-4056-9456-51c0f13af7ec-kube-api-access-rxdtq\") 
pod \"must-gather-s6ssr\" (UID: \"a184f497-a24b-4056-9456-51c0f13af7ec\") " pod="openshift-must-gather-khpqm/must-gather-s6ssr" Feb 24 06:30:08.472297 master-0 kubenswrapper[34361]: I0224 06:30:08.472086 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnt7\" (UniqueName: \"kubernetes.io/projected/6e1e9070-21a2-474c-a2a2-695dfc90d09d-kube-api-access-psnt7\") pod \"must-gather-nz2cg\" (UID: \"6e1e9070-21a2-474c-a2a2-695dfc90d09d\") " pod="openshift-must-gather-khpqm/must-gather-nz2cg" Feb 24 06:30:08.472521 master-0 kubenswrapper[34361]: I0224 06:30:08.472414 34361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e1e9070-21a2-474c-a2a2-695dfc90d09d-must-gather-output\") pod \"must-gather-nz2cg\" (UID: \"6e1e9070-21a2-474c-a2a2-695dfc90d09d\") " pod="openshift-must-gather-khpqm/must-gather-nz2cg" Feb 24 06:30:08.473680 master-0 kubenswrapper[34361]: I0224 06:30:08.473641 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a184f497-a24b-4056-9456-51c0f13af7ec-must-gather-output\") pod \"must-gather-s6ssr\" (UID: \"a184f497-a24b-4056-9456-51c0f13af7ec\") " pod="openshift-must-gather-khpqm/must-gather-s6ssr" Feb 24 06:30:08.500531 master-0 kubenswrapper[34361]: I0224 06:30:08.497329 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxdtq\" (UniqueName: \"kubernetes.io/projected/a184f497-a24b-4056-9456-51c0f13af7ec-kube-api-access-rxdtq\") pod \"must-gather-s6ssr\" (UID: \"a184f497-a24b-4056-9456-51c0f13af7ec\") " pod="openshift-must-gather-khpqm/must-gather-s6ssr" Feb 24 06:30:08.576053 master-0 kubenswrapper[34361]: I0224 06:30:08.575825 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/6e1e9070-21a2-474c-a2a2-695dfc90d09d-must-gather-output\") pod \"must-gather-nz2cg\" (UID: \"6e1e9070-21a2-474c-a2a2-695dfc90d09d\") " pod="openshift-must-gather-khpqm/must-gather-nz2cg" Feb 24 06:30:08.576322 master-0 kubenswrapper[34361]: I0224 06:30:08.576071 34361 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psnt7\" (UniqueName: \"kubernetes.io/projected/6e1e9070-21a2-474c-a2a2-695dfc90d09d-kube-api-access-psnt7\") pod \"must-gather-nz2cg\" (UID: \"6e1e9070-21a2-474c-a2a2-695dfc90d09d\") " pod="openshift-must-gather-khpqm/must-gather-nz2cg" Feb 24 06:30:08.576532 master-0 kubenswrapper[34361]: I0224 06:30:08.576468 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e1e9070-21a2-474c-a2a2-695dfc90d09d-must-gather-output\") pod \"must-gather-nz2cg\" (UID: \"6e1e9070-21a2-474c-a2a2-695dfc90d09d\") " pod="openshift-must-gather-khpqm/must-gather-nz2cg" Feb 24 06:30:08.594467 master-0 kubenswrapper[34361]: I0224 06:30:08.594396 34361 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psnt7\" (UniqueName: \"kubernetes.io/projected/6e1e9070-21a2-474c-a2a2-695dfc90d09d-kube-api-access-psnt7\") pod \"must-gather-nz2cg\" (UID: \"6e1e9070-21a2-474c-a2a2-695dfc90d09d\") " pod="openshift-must-gather-khpqm/must-gather-nz2cg" Feb 24 06:30:08.628153 master-0 kubenswrapper[34361]: I0224 06:30:08.628081 34361 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-khpqm/must-gather-s6ssr" Feb 24 06:30:08.703339 master-0 kubenswrapper[34361]: I0224 06:30:08.703212 34361 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-khpqm/must-gather-nz2cg" Feb 24 06:30:09.233189 master-0 kubenswrapper[34361]: I0224 06:30:09.230928 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-khpqm/must-gather-s6ssr"] Feb 24 06:30:09.238926 master-0 kubenswrapper[34361]: W0224 06:30:09.238807 34361 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda184f497_a24b_4056_9456_51c0f13af7ec.slice/crio-2a54b90121ea3c9a28b55e38912b47c8116232b131c25b467d6e65ca50f3e37a WatchSource:0}: Error finding container 2a54b90121ea3c9a28b55e38912b47c8116232b131c25b467d6e65ca50f3e37a: Status 404 returned error can't find the container with id 2a54b90121ea3c9a28b55e38912b47c8116232b131c25b467d6e65ca50f3e37a Feb 24 06:30:09.242550 master-0 kubenswrapper[34361]: I0224 06:30:09.242193 34361 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 06:30:09.447839 master-0 kubenswrapper[34361]: I0224 06:30:09.445594 34361 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-khpqm/must-gather-nz2cg"] Feb 24 06:30:09.897053 master-0 kubenswrapper[34361]: I0224 06:30:09.896970 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khpqm/must-gather-s6ssr" event={"ID":"a184f497-a24b-4056-9456-51c0f13af7ec","Type":"ContainerStarted","Data":"2a54b90121ea3c9a28b55e38912b47c8116232b131c25b467d6e65ca50f3e37a"} Feb 24 06:30:09.900924 master-0 kubenswrapper[34361]: I0224 06:30:09.900815 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khpqm/must-gather-nz2cg" event={"ID":"6e1e9070-21a2-474c-a2a2-695dfc90d09d","Type":"ContainerStarted","Data":"8d3b1a9011b86332b247a84cc2d9cdf96b7cb4bf66b45faa920ac42cb7b7ea8d"} Feb 24 06:30:11.936193 master-0 kubenswrapper[34361]: I0224 06:30:11.936119 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-khpqm/must-gather-nz2cg" event={"ID":"6e1e9070-21a2-474c-a2a2-695dfc90d09d","Type":"ContainerStarted","Data":"9d3c301da37217c7a09a9f7bd68eecef5f20413cca626d42a7ca503bd1af3e04"} Feb 24 06:30:11.936193 master-0 kubenswrapper[34361]: I0224 06:30:11.936201 34361 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-khpqm/must-gather-nz2cg" event={"ID":"6e1e9070-21a2-474c-a2a2-695dfc90d09d","Type":"ContainerStarted","Data":"6186f26f2da95ca74ddc0cb64c3170a209db0a4eaada8a941c8903386d934a8c"} Feb 24 06:30:11.997168 master-0 kubenswrapper[34361]: I0224 06:30:11.996989 34361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-khpqm/must-gather-nz2cg" podStartSLOduration=2.602861601 podStartE2EDuration="3.99689882s" podCreationTimestamp="2026-02-24 06:30:08 +0000 UTC" firstStartedPulling="2026-02-24 06:30:09.439401488 +0000 UTC m=+3169.142018554" lastFinishedPulling="2026-02-24 06:30:10.833438727 +0000 UTC m=+3170.536055773" observedRunningTime="2026-02-24 06:30:11.979582913 +0000 UTC m=+3171.682199969" watchObservedRunningTime="2026-02-24 06:30:11.99689882 +0000 UTC m=+3171.699515866" Feb 24 06:30:13.869754 master-0 kubenswrapper[34361]: I0224 06:30:13.869689 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-7g2gq_0e05783d-6bd1-4c71-87d9-1eb3edd827b3/cluster-version-operator/0.log" Feb 24 06:30:14.172903 master-0 kubenswrapper[34361]: I0224 06:30:14.172783 34361 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-57476485-7g2gq_0e05783d-6bd1-4c71-87d9-1eb3edd827b3/cluster-version-operator/1.log"